2025-09-06 00:00:07.074725 | Job console starting
2025-09-06 00:00:07.089976 | Updating git repos
2025-09-06 00:00:07.165189 | Cloning repos into workspace
2025-09-06 00:00:07.399423 | Restoring repo states
2025-09-06 00:00:07.439993 | Merging changes
2025-09-06 00:00:07.440017 | Checking out repos
2025-09-06 00:00:07.957082 | Preparing playbooks
2025-09-06 00:00:08.955809 | Running Ansible setup
2025-09-06 00:00:14.125029 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-06 00:00:16.082305 |
2025-09-06 00:00:16.082849 | PLAY [Base pre]
2025-09-06 00:00:16.170106 |
2025-09-06 00:00:16.170226 | TASK [Setup log path fact]
2025-09-06 00:00:16.208521 | orchestrator | ok
2025-09-06 00:00:16.266929 |
2025-09-06 00:00:16.267061 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-06 00:00:16.324652 | orchestrator | ok
2025-09-06 00:00:16.340438 |
2025-09-06 00:00:16.340534 | TASK [emit-job-header : Print job information]
2025-09-06 00:00:16.451745 | # Job Information
2025-09-06 00:00:16.451955 | Ansible Version: 2.16.14
2025-09-06 00:00:16.451991 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-06 00:00:16.452024 | Pipeline: periodic-midnight
2025-09-06 00:00:16.452046 | Executor: 521e9411259a
2025-09-06 00:00:16.452064 | Triggered by: https://github.com/osism/testbed
2025-09-06 00:00:16.452081 | Event ID: 51270a9e064f44e48926813dfe3f9e45
2025-09-06 00:00:16.463484 |
2025-09-06 00:00:16.463576 | LOOP [emit-job-header : Print node information]
2025-09-06 00:00:16.834572 | orchestrator | ok:
2025-09-06 00:00:16.834705 | orchestrator | # Node Information
2025-09-06 00:00:16.834732 | orchestrator | Inventory Hostname: orchestrator
2025-09-06 00:00:16.834752 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-06 00:00:16.834771 | orchestrator | Username: zuul-testbed04
2025-09-06 00:00:16.834820 | orchestrator | Distro: Debian 12.11
2025-09-06 00:00:16.834856 | orchestrator | Provider: static-testbed
2025-09-06 00:00:16.834875 | orchestrator | Region:
2025-09-06 00:00:16.834928 | orchestrator | Label: testbed-orchestrator
2025-09-06 00:00:16.834953 | orchestrator | Product Name: OpenStack Nova
2025-09-06 00:00:16.834971 | orchestrator | Interface IP: 81.163.193.140
2025-09-06 00:00:16.854870 |
2025-09-06 00:00:16.856358 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-06 00:00:18.188569 | orchestrator -> localhost | changed
2025-09-06 00:00:18.195580 |
2025-09-06 00:00:18.195681 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-06 00:00:20.458527 | orchestrator -> localhost | changed
2025-09-06 00:00:20.470212 |
2025-09-06 00:00:20.470306 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-06 00:00:21.127569 | orchestrator -> localhost | ok
2025-09-06 00:00:21.134018 |
2025-09-06 00:00:21.134138 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-06 00:00:21.171511 | orchestrator | ok
2025-09-06 00:00:21.197010 | orchestrator | included: /var/lib/zuul/builds/71ef850d7ba44e3781b4c25afea98073/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-06 00:00:21.203651 |
2025-09-06 00:00:21.203734 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-06 00:00:25.970100 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-06 00:00:25.970252 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/71ef850d7ba44e3781b4c25afea98073/work/71ef850d7ba44e3781b4c25afea98073_id_rsa
2025-09-06 00:00:25.970282 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/71ef850d7ba44e3781b4c25afea98073/work/71ef850d7ba44e3781b4c25afea98073_id_rsa.pub
2025-09-06 00:00:25.970305 | orchestrator -> localhost | The key fingerprint is:
2025-09-06 00:00:25.970327 | orchestrator -> localhost | SHA256:oP670tEOEQd8pMXbWW8TeV4vPjriFJp4PosIfhFd8/s zuul-build-sshkey
2025-09-06 00:00:25.970346 | orchestrator -> localhost | The key's randomart image is:
2025-09-06 00:00:25.970371 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-06 00:00:25.970390 | orchestrator -> localhost | | ..+o . |
2025-09-06 00:00:25.970409 | orchestrator -> localhost | | oo* . o o|
2025-09-06 00:00:25.970426 | orchestrator -> localhost | | .o= = o . +o|
2025-09-06 00:00:25.970442 | orchestrator -> localhost | | ..o.. + = o|
2025-09-06 00:00:25.970458 | orchestrator -> localhost | | .. oS .. o o |
2025-09-06 00:00:25.970481 | orchestrator -> localhost | | .. o..o.. o |
2025-09-06 00:00:25.970499 | orchestrator -> localhost | | . .o.++ .. . . |
2025-09-06 00:00:25.970516 | orchestrator -> localhost | | . .oo.+o.. E |
2025-09-06 00:00:25.970533 | orchestrator -> localhost | | ....=oo+.. . |
2025-09-06 00:00:25.970549 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-06 00:00:25.970586 | orchestrator -> localhost | ok: Runtime: 0:00:03.681782
2025-09-06 00:00:25.976785 |
2025-09-06 00:00:25.976872 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-06 00:00:25.993774 | orchestrator | ok
2025-09-06 00:00:26.026337 | orchestrator | included: /var/lib/zuul/builds/71ef850d7ba44e3781b4c25afea98073/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-06 00:00:26.036559 |
2025-09-06 00:00:26.036647 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-06 00:00:26.061359 | orchestrator | skipping: Conditional result was False
2025-09-06 00:00:26.076871 |
2025-09-06 00:00:26.076994 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-06 00:00:26.778886 | orchestrator | changed
2025-09-06 00:00:26.784757 |
2025-09-06 00:00:26.784834 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-06 00:00:27.085860 | orchestrator | ok
2025-09-06 00:00:27.091135 |
2025-09-06 00:00:27.091214 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-06 00:00:27.544212 | orchestrator | ok
2025-09-06 00:00:27.549140 |
2025-09-06 00:00:27.549222 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-06 00:00:28.010224 | orchestrator | ok
2025-09-06 00:00:28.020360 |
2025-09-06 00:00:28.020448 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-06 00:00:28.055232 | orchestrator | skipping: Conditional result was False
2025-09-06 00:00:28.060992 |
2025-09-06 00:00:28.061080 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-06 00:00:28.927840 | orchestrator -> localhost | changed
2025-09-06 00:00:28.944693 |
2025-09-06 00:00:28.944796 | TASK [add-build-sshkey : Add back temp key]
2025-09-06 00:00:29.819303 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/71ef850d7ba44e3781b4c25afea98073/work/71ef850d7ba44e3781b4c25afea98073_id_rsa (zuul-build-sshkey)
2025-09-06 00:00:29.819500 | orchestrator -> localhost | ok: Runtime: 0:00:00.008101
2025-09-06 00:00:29.827324 |
2025-09-06 00:00:29.827419 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-06 00:00:30.281298 | orchestrator | ok
2025-09-06 00:00:30.287890 |
2025-09-06 00:00:30.287994 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-06 00:00:30.311765 | orchestrator | skipping: Conditional result was False
2025-09-06 00:00:30.356288 |
2025-09-06 00:00:30.356416 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-06 00:00:30.873414 | orchestrator | ok
2025-09-06 00:00:30.882600 |
2025-09-06 00:00:30.882694 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-06 00:00:30.952880 | orchestrator | ok
2025-09-06 00:00:30.958735 |
2025-09-06 00:00:30.958816 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-06 00:00:31.516592 | orchestrator -> localhost | ok
2025-09-06 00:00:31.524401 |
2025-09-06 00:00:31.524488 | TASK [validate-host : Collect information about the host]
2025-09-06 00:00:33.037986 | orchestrator | ok
2025-09-06 00:00:33.057822 |
2025-09-06 00:00:33.057946 | TASK [validate-host : Sanitize hostname]
2025-09-06 00:00:33.170270 | orchestrator | ok
2025-09-06 00:00:33.174598 |
2025-09-06 00:00:33.174679 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-06 00:00:34.253360 | orchestrator -> localhost | changed
2025-09-06 00:00:34.263101 |
2025-09-06 00:00:34.263202 | TASK [validate-host : Collect information about zuul worker]
2025-09-06 00:00:34.973969 | orchestrator | ok
2025-09-06 00:00:34.980070 |
2025-09-06 00:00:34.980167 | TASK [validate-host : Write out all zuul information for each host]
2025-09-06 00:00:35.677734 | orchestrator -> localhost | changed
2025-09-06 00:00:35.691586 |
2025-09-06 00:00:35.691679 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-06 00:00:35.990490 | orchestrator | ok
2025-09-06 00:00:35.995429 |
2025-09-06 00:00:35.995509 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-06 00:01:09.726013 | orchestrator | changed:
2025-09-06 00:01:09.726241 | orchestrator | .d..t...... src/
2025-09-06 00:01:09.726277 | orchestrator | .d..t...... src/github.com/
2025-09-06 00:01:09.726302 | orchestrator | .d..t...... src/github.com/osism/
2025-09-06 00:01:09.726324 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-06 00:01:09.726344 | orchestrator | RedHat.yml
2025-09-06 00:01:09.745017 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-06 00:01:09.745035 | orchestrator | RedHat.yml
2025-09-06 00:01:09.745087 | orchestrator | = 1.53.0"...
2025-09-06 00:01:27.762887 | orchestrator | 00:01:27.762 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-06 00:01:27.795133 | orchestrator | 00:01:27.794 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-06 00:01:27.961022 | orchestrator | 00:01:27.960 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-06 00:01:28.644008 | orchestrator | 00:01:28.643 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-06 00:01:28.721013 | orchestrator | 00:01:28.720 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-06 00:01:29.185674 | orchestrator | 00:01:29.185 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-06 00:01:29.399428 | orchestrator | 00:01:29.399 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-06 00:01:30.172706 | orchestrator | 00:01:30.172 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-06 00:01:30.172801 | orchestrator | 00:01:30.172 STDOUT terraform: Providers are signed by their developers.
2025-09-06 00:01:30.172809 | orchestrator | 00:01:30.172 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-06 00:01:30.172814 | orchestrator | 00:01:30.172 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-06 00:01:30.172817 | orchestrator | 00:01:30.172 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-06 00:01:30.172825 | orchestrator | 00:01:30.172 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-06 00:01:30.172831 | orchestrator | 00:01:30.172 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-06 00:01:30.172835 | orchestrator | 00:01:30.172 STDOUT terraform: you run "tofu init" in the future.
2025-09-06 00:01:30.172840 | orchestrator | 00:01:30.172 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-06 00:01:30.172844 | orchestrator | 00:01:30.172 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-06 00:01:30.172847 | orchestrator | 00:01:30.172 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-06 00:01:30.172852 | orchestrator | 00:01:30.172 STDOUT terraform: should now work.
2025-09-06 00:01:30.172877 | orchestrator | 00:01:30.172 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-06 00:01:30.172882 | orchestrator | 00:01:30.172 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-06 00:01:30.172895 | orchestrator | 00:01:30.172 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-06 00:01:30.297390 | orchestrator | 00:01:30.295 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-09-06 00:01:30.297450 | orchestrator | 00:01:30.295 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-06 00:01:30.504390 | orchestrator | 00:01:30.504 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-06 00:01:30.504449 | orchestrator | 00:01:30.504 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-06 00:01:30.504456 | orchestrator | 00:01:30.504 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-06 00:01:30.504461 | orchestrator | 00:01:30.504 STDOUT terraform: for this configuration.
2025-09-06 00:01:30.650991 | orchestrator | 00:01:30.649 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
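For readers following the init output above: the provider resolution implies version constraints roughly like the sketch below. This is only a plausible required_providers block reconstructed from the log, not the testbed's actual configuration; the local constraint and provider sources are read from the "Finding"/"Installing" lines, while the openstack constraint operator is assumed from the truncated "= 1.53.0" fragment earlier in the log.

terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # operator assumed; the log only preserves the fragment '= 1.53.0'
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # matches 'Finding hashicorp/local versions matching ">= 2.2.0"'
    }
    null = {
      source = "hashicorp/null" # no constraint; the log shows 'Finding latest version of hashicorp/null'
    }
  }
}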
2025-09-06 00:01:30.651049 | orchestrator | 00:01:30.649 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-09-06 00:01:30.777175 | orchestrator | 00:01:30.777 STDOUT terraform: ci.auto.tfvars 2025-09-06 00:01:30.786102 | orchestrator | 00:01:30.782 STDOUT terraform: default_custom.tf 2025-09-06 00:01:30.921053 | orchestrator | 00:01:30.920 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead. 2025-09-06 00:01:31.875907 | orchestrator | 00:01:31.875 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-09-06 00:01:32.448715 | orchestrator | 00:01:32.448 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-09-06 00:01:32.778100 | orchestrator | 00:01:32.777 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-09-06 00:01:32.778168 | orchestrator | 00:01:32.777 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-09-06 00:01:32.778176 | orchestrator | 00:01:32.777 STDOUT terraform:  + create 2025-09-06 00:01:32.778183 | orchestrator | 00:01:32.777 STDOUT terraform:  <= read (data resources) 2025-09-06 00:01:32.780436 | orchestrator | 00:01:32.777 STDOUT terraform: OpenTofu will perform the following actions: 2025-09-06 00:01:32.780459 | orchestrator | 00:01:32.778 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-09-06 00:01:32.780465 | orchestrator | 00:01:32.778 STDOUT terraform:  # (config refers to values not yet known) 2025-09-06 00:01:32.780470 | orchestrator | 00:01:32.778 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-09-06 00:01:32.780476 | orchestrator | 00:01:32.778 STDOUT terraform:  + checksum = (known after apply) 2025-09-06 00:01:32.780481 | orchestrator | 00:01:32.778 STDOUT terraform:  + created_at = (known after apply) 2025-09-06 00:01:32.780486 | orchestrator | 00:01:32.778 STDOUT terraform:  + file = (known after apply) 2025-09-06 00:01:32.780491 | orchestrator | 00:01:32.778 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.780495 | orchestrator | 00:01:32.778 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.780514 | orchestrator | 00:01:32.778 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-06 00:01:32.780519 | orchestrator | 00:01:32.778 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-06 00:01:32.780524 | orchestrator | 00:01:32.778 STDOUT terraform:  + most_recent = true 2025-09-06 00:01:32.780528 | orchestrator | 00:01:32.778 STDOUT terraform:  + name = (known after apply) 2025-09-06 00:01:32.780533 | orchestrator | 00:01:32.778 STDOUT terraform:  + protected = (known after apply) 2025-09-06 00:01:32.780538 | orchestrator | 00:01:32.778 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.780543 | orchestrator | 00:01:32.778 STDOUT terraform:  + schema = (known after apply) 2025-09-06 00:01:32.780548 | orchestrator | 00:01:32.778 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-06 00:01:32.780552 | orchestrator | 00:01:32.778 STDOUT terraform:  + tags = (known after apply) 2025-09-06 00:01:32.780557 | orchestrator | 00:01:32.778 STDOUT terraform:  + updated_at = (known after apply) 2025-09-06 00:01:32.780562 | orchestrator | 
00:01:32.778 STDOUT terraform:  } 2025-09-06 00:01:32.780570 | orchestrator | 00:01:32.778 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-09-06 00:01:32.780575 | orchestrator | 00:01:32.778 STDOUT terraform:  # (config refers to values not yet known) 2025-09-06 00:01:32.780579 | orchestrator | 00:01:32.778 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-09-06 00:01:32.780584 | orchestrator | 00:01:32.778 STDOUT terraform:  + checksum = (known after apply) 2025-09-06 00:01:32.780589 | orchestrator | 00:01:32.779 STDOUT terraform:  + created_at = (known after apply) 2025-09-06 00:01:32.780598 | orchestrator | 00:01:32.779 STDOUT terraform:  + file = (known after apply) 2025-09-06 00:01:32.780603 | orchestrator | 00:01:32.779 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.780607 | orchestrator | 00:01:32.779 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.780612 | orchestrator | 00:01:32.779 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-06 00:01:32.780617 | orchestrator | 00:01:32.779 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-06 00:01:32.780621 | orchestrator | 00:01:32.779 STDOUT terraform:  + most_recent = true 2025-09-06 00:01:32.780626 | orchestrator | 00:01:32.779 STDOUT terraform:  + name = (known after apply) 2025-09-06 00:01:32.780631 | orchestrator | 00:01:32.779 STDOUT terraform:  + protected = (known after apply) 2025-09-06 00:01:32.780635 | orchestrator | 00:01:32.779 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.780640 | orchestrator | 00:01:32.779 STDOUT terraform:  + schema = (known after apply) 2025-09-06 00:01:32.780644 | orchestrator | 00:01:32.779 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-06 00:01:32.780649 | orchestrator | 00:01:32.779 STDOUT terraform:  + tags = (known after apply) 2025-09-06 00:01:32.780654 | orchestrator | 00:01:32.779 STDOUT terraform:  + updated_at = (known after apply) 2025-09-06 00:01:32.780669 | orchestrator | 00:01:32.779 STDOUT terraform:  } 2025-09-06 00:01:32.780674 | orchestrator | 00:01:32.779 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-09-06 00:01:32.780683 | orchestrator | 00:01:32.779 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-09-06 00:01:32.780687 | orchestrator | 00:01:32.779 STDOUT terraform:  + content = (known after apply) 2025-09-06 00:01:32.780692 | orchestrator | 00:01:32.779 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-06 00:01:32.780697 | orchestrator | 00:01:32.779 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-06 00:01:32.780701 | orchestrator | 00:01:32.779 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-06 00:01:32.780706 | orchestrator | 00:01:32.779 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-06 00:01:32.780710 | orchestrator | 00:01:32.779 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-06 00:01:32.780715 | orchestrator | 00:01:32.779 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-06 00:01:32.780720 | orchestrator | 00:01:32.779 STDOUT terraform:  + directory_permission = "0777" 2025-09-06 00:01:32.780724 | orchestrator | 00:01:32.779 STDOUT terraform:  + file_permission = "0644" 2025-09-06 00:01:32.780729 | orchestrator | 00:01:32.779 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-09-06 00:01:32.780734 | orchestrator | 00:01:32.779 STDOUT 
terraform:  + id = (known after apply) 2025-09-06 00:01:32.780738 | orchestrator | 00:01:32.779 STDOUT terraform:  } 2025-09-06 00:01:32.780743 | orchestrator | 00:01:32.779 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-09-06 00:01:32.780747 | orchestrator | 00:01:32.779 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-09-06 00:01:32.780752 | orchestrator | 00:01:32.780 STDOUT terraform:  + content = (known after apply) 2025-09-06 00:01:32.780756 | orchestrator | 00:01:32.780 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-06 00:01:32.780761 | orchestrator | 00:01:32.780 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-06 00:01:32.780765 | orchestrator | 00:01:32.780 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-06 00:01:32.780770 | orchestrator | 00:01:32.780 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-06 00:01:32.780774 | orchestrator | 00:01:32.780 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-06 00:01:32.780782 | orchestrator | 00:01:32.780 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-06 00:01:32.780787 | orchestrator | 00:01:32.780 STDOUT terraform:  + directory_permission = "0777" 2025-09-06 00:01:32.780791 | orchestrator | 00:01:32.780 STDOUT terraform:  + file_permission = "0644" 2025-09-06 00:01:32.780796 | orchestrator | 00:01:32.780 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-09-06 00:01:32.780800 | orchestrator | 00:01:32.780 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.780805 | orchestrator | 00:01:32.780 STDOUT terraform:  } 2025-09-06 00:01:32.782039 | orchestrator | 00:01:32.780 STDOUT terraform:  # local_file.inventory will be created 2025-09-06 00:01:32.782091 | orchestrator | 00:01:32.780 STDOUT terraform:  + resource "local_file" "inventory" { 2025-09-06 00:01:32.782097 | orchestrator | 00:01:32.780 STDOUT terraform:  + content = (known after apply) 2025-09-06 00:01:32.782114 | orchestrator | 00:01:32.780 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-06 00:01:32.782118 | orchestrator | 00:01:32.780 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-06 00:01:32.782122 | orchestrator | 00:01:32.781 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-06 00:01:32.782126 | orchestrator | 00:01:32.781 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-06 00:01:32.782130 | orchestrator | 00:01:32.781 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-06 00:01:32.782134 | orchestrator | 00:01:32.781 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-06 00:01:32.782139 | orchestrator | 00:01:32.781 STDOUT terraform:  + directory_permission = "0777" 2025-09-06 00:01:32.782143 | orchestrator | 00:01:32.781 STDOUT terraform:  + file_permission = "0644" 2025-09-06 00:01:32.782147 | orchestrator | 00:01:32.781 STDOUT terraform:  + filename = "inventory.ci" 2025-09-06 00:01:32.782151 | orchestrator | 00:01:32.781 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.782155 | orchestrator | 00:01:32.781 STDOUT terraform:  } 2025-09-06 00:01:32.782159 | orchestrator | 00:01:32.781 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-09-06 00:01:32.782163 | orchestrator | 00:01:32.781 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-09-06 00:01:32.782168 | orchestrator | 00:01:32.781 STDOUT terraform:  + content = (sensitive value) 2025-09-06 
00:01:32.782172 | orchestrator | 00:01:32.781 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-06 00:01:32.782176 | orchestrator | 00:01:32.781 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-06 00:01:32.782180 | orchestrator | 00:01:32.781 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-06 00:01:32.782184 | orchestrator | 00:01:32.781 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-06 00:01:32.782188 | orchestrator | 00:01:32.781 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-06 00:01:32.782191 | orchestrator | 00:01:32.781 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-06 00:01:32.782195 | orchestrator | 00:01:32.781 STDOUT terraform:  + directory_permission = "0700" 2025-09-06 00:01:32.782199 | orchestrator | 00:01:32.781 STDOUT terraform:  + file_permission = "0600" 2025-09-06 00:01:32.782203 | orchestrator | 00:01:32.781 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-09-06 00:01:32.782212 | orchestrator | 00:01:32.781 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.782216 | orchestrator | 00:01:32.781 STDOUT terraform:  } 2025-09-06 00:01:32.782219 | orchestrator | 00:01:32.781 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-09-06 00:01:32.782223 | orchestrator | 00:01:32.781 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-09-06 00:01:32.782227 | orchestrator | 00:01:32.781 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.782231 | orchestrator | 00:01:32.781 STDOUT terraform:  } 2025-09-06 00:01:32.782235 | orchestrator | 00:01:32.781 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-09-06 00:01:32.782243 | orchestrator | 00:01:32.781 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-09-06 00:01:32.782254 | orchestrator | 00:01:32.781 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.782258 | orchestrator | 00:01:32.781 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.782262 | orchestrator | 00:01:32.782 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.782385 | orchestrator | 00:01:32.782 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.782749 | orchestrator | 00:01:32.782 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.783073 | orchestrator | 00:01:32.782 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-09-06 00:01:32.783310 | orchestrator | 00:01:32.783 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.783503 | orchestrator | 00:01:32.783 STDOUT terraform:  + size = 80 2025-09-06 00:01:32.783737 | orchestrator | 00:01:32.783 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.783979 | orchestrator | 00:01:32.783 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.784167 | orchestrator | 00:01:32.783 STDOUT terraform:  } 2025-09-06 00:01:32.784700 | orchestrator | 00:01:32.784 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-09-06 00:01:32.785037 | orchestrator | 00:01:32.784 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-06 00:01:32.785388 | orchestrator | 00:01:32.785 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.785585 | orchestrator | 00:01:32.785 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 
00:01:32.785840 | orchestrator | 00:01:32.785 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.786158 | orchestrator | 00:01:32.785 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.786487 | orchestrator | 00:01:32.786 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.786938 | orchestrator | 00:01:32.786 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-09-06 00:01:32.787179 | orchestrator | 00:01:32.786 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.787321 | orchestrator | 00:01:32.787 STDOUT terraform:  + size = 80 2025-09-06 00:01:32.787475 | orchestrator | 00:01:32.787 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.787657 | orchestrator | 00:01:32.787 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.787748 | orchestrator | 00:01:32.787 STDOUT terraform:  } 2025-09-06 00:01:32.788128 | orchestrator | 00:01:32.787 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-09-06 00:01:32.788658 | orchestrator | 00:01:32.788 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-06 00:01:32.789057 | orchestrator | 00:01:32.788 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.789250 | orchestrator | 00:01:32.789 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.789584 | orchestrator | 00:01:32.789 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.789796 | orchestrator | 00:01:32.789 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.789932 | orchestrator | 00:01:32.789 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.789974 | orchestrator | 00:01:32.789 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-09-06 00:01:32.790004 | orchestrator | 00:01:32.789 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.790039 | orchestrator | 00:01:32.789 STDOUT terraform:  + size = 80 2025-09-06 00:01:32.790062 | orchestrator | 00:01:32.790 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.790086 | orchestrator | 00:01:32.790 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.790094 | orchestrator | 00:01:32.790 STDOUT terraform:  } 2025-09-06 00:01:32.790156 | orchestrator | 00:01:32.790 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-09-06 00:01:32.790189 | orchestrator | 00:01:32.790 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-06 00:01:32.790225 | orchestrator | 00:01:32.790 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.790250 | orchestrator | 00:01:32.790 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.790283 | orchestrator | 00:01:32.790 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.790317 | orchestrator | 00:01:32.790 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.790352 | orchestrator | 00:01:32.790 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.790395 | orchestrator | 00:01:32.790 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-09-06 00:01:32.790431 | orchestrator | 00:01:32.790 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.790450 | orchestrator | 00:01:32.790 STDOUT terraform:  + size = 80 2025-09-06 00:01:32.790473 | orchestrator | 00:01:32.790 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-09-06 00:01:32.790498 | orchestrator | 00:01:32.790 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.790520 | orchestrator | 00:01:32.790 STDOUT terraform:  } 2025-09-06 00:01:32.790558 | orchestrator | 00:01:32.790 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-09-06 00:01:32.790600 | orchestrator | 00:01:32.790 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-06 00:01:32.790634 | orchestrator | 00:01:32.790 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.790658 | orchestrator | 00:01:32.790 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.790692 | orchestrator | 00:01:32.790 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.790726 | orchestrator | 00:01:32.790 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.790760 | orchestrator | 00:01:32.790 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.790802 | orchestrator | 00:01:32.790 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-09-06 00:01:32.790835 | orchestrator | 00:01:32.790 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.790867 | orchestrator | 00:01:32.790 STDOUT terraform:  + size = 80 2025-09-06 00:01:32.790890 | orchestrator | 00:01:32.790 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.790910 | orchestrator | 00:01:32.790 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.790916 | orchestrator | 00:01:32.790 STDOUT terraform:  } 2025-09-06 00:01:32.791031 | orchestrator | 00:01:32.790 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-09-06 00:01:32.791064 | orchestrator | 00:01:32.791 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-06 00:01:32.791099 | orchestrator | 00:01:32.791 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.791121 | orchestrator | 00:01:32.791 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.791157 | orchestrator | 00:01:32.791 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.791193 | orchestrator | 00:01:32.791 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.791228 | orchestrator | 00:01:32.791 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.791271 | orchestrator | 00:01:32.791 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-09-06 00:01:32.791304 | orchestrator | 00:01:32.791 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.791324 | orchestrator | 00:01:32.791 STDOUT terraform:  + size = 80 2025-09-06 00:01:32.791349 | orchestrator | 00:01:32.791 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.791372 | orchestrator | 00:01:32.791 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.791378 | orchestrator | 00:01:32.791 STDOUT terraform:  } 2025-09-06 00:01:32.791427 | orchestrator | 00:01:32.791 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-09-06 00:01:32.791472 | orchestrator | 00:01:32.791 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-06 00:01:32.791505 | orchestrator | 00:01:32.791 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.791528 | orchestrator | 00:01:32.791 STDOUT terraform:  + availability_zone = "nova" 
2025-09-06 00:01:32.791566 | orchestrator | 00:01:32.791 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.791597 | orchestrator | 00:01:32.791 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.791631 | orchestrator | 00:01:32.791 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.791673 | orchestrator | 00:01:32.791 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-09-06 00:01:32.791707 | orchestrator | 00:01:32.791 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.791727 | orchestrator | 00:01:32.791 STDOUT terraform:  + size = 80 2025-09-06 00:01:32.791750 | orchestrator | 00:01:32.791 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.791773 | orchestrator | 00:01:32.791 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.791779 | orchestrator | 00:01:32.791 STDOUT terraform:  } 2025-09-06 00:01:32.791825 | orchestrator | 00:01:32.791 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-09-06 00:01:32.791895 | orchestrator | 00:01:32.791 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-06 00:01:32.791929 | orchestrator | 00:01:32.791 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.791952 | orchestrator | 00:01:32.791 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.791990 | orchestrator | 00:01:32.791 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.792025 | orchestrator | 00:01:32.791 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.792062 | orchestrator | 00:01:32.792 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-06 00:01:32.792097 | orchestrator | 00:01:32.792 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.792121 | orchestrator | 00:01:32.792 STDOUT terraform:  + size = 20 2025-09-06 00:01:32.792144 | orchestrator | 00:01:32.792 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.792167 | orchestrator | 00:01:32.792 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.792174 | orchestrator | 00:01:32.792 STDOUT terraform:  } 2025-09-06 00:01:32.792247 | orchestrator | 00:01:32.792 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-06 00:01:32.792288 | orchestrator | 00:01:32.792 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-06 00:01:32.792323 | orchestrator | 00:01:32.792 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.792346 | orchestrator | 00:01:32.792 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.792384 | orchestrator | 00:01:32.792 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.792427 | orchestrator | 00:01:32.792 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.792467 | orchestrator | 00:01:32.792 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-06 00:01:32.792495 | orchestrator | 00:01:32.792 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.792516 | orchestrator | 00:01:32.792 STDOUT terraform:  + size = 20 2025-09-06 00:01:32.792549 | orchestrator | 00:01:32.792 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.792556 | orchestrator | 00:01:32.792 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.792583 | orchestrator | 00:01:32.792 STDOUT terraform:  } 2025-09-06 00:01:32.792619 | orchestrator 
| 00:01:32.792 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-06 00:01:32.792661 | orchestrator | 00:01:32.792 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-06 00:01:32.792697 | orchestrator | 00:01:32.792 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.792718 | orchestrator | 00:01:32.792 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.792752 | orchestrator | 00:01:32.792 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.792785 | orchestrator | 00:01:32.792 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.792825 | orchestrator | 00:01:32.792 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-06 00:01:32.792871 | orchestrator | 00:01:32.792 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.792888 | orchestrator | 00:01:32.792 STDOUT terraform:  + size = 20 2025-09-06 00:01:32.792910 | orchestrator | 00:01:32.792 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.792934 | orchestrator | 00:01:32.792 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.792940 | orchestrator | 00:01:32.792 STDOUT terraform:  } 2025-09-06 00:01:32.792989 | orchestrator | 00:01:32.792 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-06 00:01:32.793031 | orchestrator | 00:01:32.792 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-06 00:01:32.793067 | orchestrator | 00:01:32.793 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.793089 | orchestrator | 00:01:32.793 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.793120 | orchestrator | 00:01:32.793 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.793151 | orchestrator | 00:01:32.793 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.793188 | orchestrator | 00:01:32.793 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-06 00:01:32.793222 | orchestrator | 00:01:32.793 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.793244 | orchestrator | 00:01:32.793 STDOUT terraform:  + size = 20 2025-09-06 00:01:32.793269 | orchestrator | 00:01:32.793 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.793293 | orchestrator | 00:01:32.793 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.793300 | orchestrator | 00:01:32.793 STDOUT terraform:  } 2025-09-06 00:01:32.793348 | orchestrator | 00:01:32.793 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-06 00:01:32.793388 | orchestrator | 00:01:32.793 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-06 00:01:32.793424 | orchestrator | 00:01:32.793 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.793447 | orchestrator | 00:01:32.793 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.793481 | orchestrator | 00:01:32.793 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.793515 | orchestrator | 00:01:32.793 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.793556 | orchestrator | 00:01:32.793 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-06 00:01:32.793587 | orchestrator | 00:01:32.793 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.793598 | orchestrator | 00:01:32.793 STDOUT 
terraform:  + size = 20 2025-09-06 00:01:32.793626 | orchestrator | 00:01:32.793 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.793649 | orchestrator | 00:01:32.793 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.793655 | orchestrator | 00:01:32.793 STDOUT terraform:  } 2025-09-06 00:01:32.793702 | orchestrator | 00:01:32.793 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-06 00:01:32.793744 | orchestrator | 00:01:32.793 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-06 00:01:32.793778 | orchestrator | 00:01:32.793 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.793802 | orchestrator | 00:01:32.793 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.793840 | orchestrator | 00:01:32.793 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.793875 | orchestrator | 00:01:32.793 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.794086 | orchestrator | 00:01:32.793 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-06 00:01:32.794166 | orchestrator | 00:01:32.793 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.794182 | orchestrator | 00:01:32.793 STDOUT terraform:  + size = 20 2025-09-06 00:01:32.794194 | orchestrator | 00:01:32.793 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.794206 | orchestrator | 00:01:32.793 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.794229 | orchestrator | 00:01:32.793 STDOUT terraform:  } 2025-09-06 00:01:32.794241 | orchestrator | 00:01:32.794 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-06 00:01:32.794757 | orchestrator | 00:01:32.794 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-06 00:01:32.794798 | orchestrator | 00:01:32.794 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.797469 | orchestrator | 00:01:32.794 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.797514 | orchestrator | 00:01:32.795 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.797530 | orchestrator | 00:01:32.795 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.797542 | orchestrator | 00:01:32.795 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-06 00:01:32.797553 | orchestrator | 00:01:32.795 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.797564 | orchestrator | 00:01:32.796 STDOUT terraform:  + size = 20 2025-09-06 00:01:32.797575 | orchestrator | 00:01:32.796 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.797586 | orchestrator | 00:01:32.796 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.797597 | orchestrator | 00:01:32.796 STDOUT terraform:  } 2025-09-06 00:01:32.797608 | orchestrator | 00:01:32.796 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-06 00:01:32.797626 | orchestrator | 00:01:32.797 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-06 00:01:32.802070 | orchestrator | 00:01:32.797 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.802106 | orchestrator | 00:01:32.797 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.802117 | orchestrator | 00:01:32.798 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.802127 | orchestrator | 
00:01:32.798 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.802137 | orchestrator | 00:01:32.798 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-06 00:01:32.802146 | orchestrator | 00:01:32.798 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.802173 | orchestrator | 00:01:32.799 STDOUT terraform:  + size = 20 2025-09-06 00:01:32.802184 | orchestrator | 00:01:32.799 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.802194 | orchestrator | 00:01:32.799 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.802204 | orchestrator | 00:01:32.799 STDOUT terraform:  } 2025-09-06 00:01:32.802214 | orchestrator | 00:01:32.799 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-06 00:01:32.802224 | orchestrator | 00:01:32.800 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-06 00:01:32.802233 | orchestrator | 00:01:32.800 STDOUT terraform:  + attachment = (known after apply) 2025-09-06 00:01:32.802243 | orchestrator | 00:01:32.800 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.802252 | orchestrator | 00:01:32.800 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.802262 | orchestrator | 00:01:32.801 STDOUT terraform:  + metadata = (known after apply) 2025-09-06 00:01:32.802271 | orchestrator | 00:01:32.801 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-06 00:01:32.802287 | orchestrator | 00:01:32.801 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.802378 | orchestrator | 00:01:32.802 STDOUT terraform:  + size = 20 2025-09-06 00:01:32.802462 | orchestrator | 00:01:32.802 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-06 00:01:32.802477 | orchestrator | 00:01:32.802 STDOUT terraform:  + volume_type = "ssd" 2025-09-06 00:01:32.802490 | orchestrator | 00:01:32.802 STDOUT terraform:  } 2025-09-06 00:01:32.802538 | orchestrator | 00:01:32.802 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-06 00:01:32.802581 | orchestrator | 00:01:32.802 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-06 00:01:32.802614 | orchestrator | 00:01:32.802 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-06 00:01:32.802651 | orchestrator | 00:01:32.802 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-06 00:01:32.802684 | orchestrator | 00:01:32.802 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-06 00:01:32.802718 | orchestrator | 00:01:32.802 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.802732 | orchestrator | 00:01:32.802 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.802758 | orchestrator | 00:01:32.802 STDOUT terraform:  + config_drive = true 2025-09-06 00:01:32.802788 | orchestrator | 00:01:32.802 STDOUT terraform:  + created = (known after apply) 2025-09-06 00:01:32.802822 | orchestrator | 00:01:32.802 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-06 00:01:32.802837 | orchestrator | 00:01:32.802 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-06 00:01:32.802885 | orchestrator | 00:01:32.802 STDOUT terraform:  + force_delete = false 2025-09-06 00:01:32.802900 | orchestrator | 00:01:32.802 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-06 00:01:32.802944 | orchestrator | 00:01:32.802 STDOUT terraform:  + id = (known after apply) 2025-09-06 
00:01:32.802977 | orchestrator | 00:01:32.802 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.803011 | orchestrator | 00:01:32.802 STDOUT terraform:  + image_name = (known after apply) 2025-09-06 00:01:32.803026 | orchestrator | 00:01:32.803 STDOUT terraform:  + key_pair = "testbed" 2025-09-06 00:01:32.803062 | orchestrator | 00:01:32.803 STDOUT terraform:  + name = "testbed-manager" 2025-09-06 00:01:32.803077 | orchestrator | 00:01:32.803 STDOUT terraform:  + power_state = "active" 2025-09-06 00:01:32.803114 | orchestrator | 00:01:32.803 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.803148 | orchestrator | 00:01:32.803 STDOUT terraform:  + security_groups = (known after apply) 2025-09-06 00:01:32.803162 | orchestrator | 00:01:32.803 STDOUT terraform:  + stop_before_destroy = false 2025-09-06 00:01:32.803202 | orchestrator | 00:01:32.803 STDOUT terraform:  + updated = (known after apply) 2025-09-06 00:01:32.803233 | orchestrator | 00:01:32.803 STDOUT terraform:  + user_data = (sensitive value) 2025-09-06 00:01:32.803247 | orchestrator | 00:01:32.803 STDOUT terraform:  + block_device { 2025-09-06 00:01:32.803260 | orchestrator | 00:01:32.803 STDOUT terraform:  + boot_index = 0 2025-09-06 00:01:32.803293 | orchestrator | 00:01:32.803 STDOUT terraform:  + delete_on_termination = false 2025-09-06 00:01:32.803328 | orchestrator | 00:01:32.803 STDOUT terraform:  + destination_type = "volume" 2025-09-06 00:01:32.803342 | orchestrator | 00:01:32.803 STDOUT terraform:  + multiattach = false 2025-09-06 00:01:32.803373 | orchestrator | 00:01:32.803 STDOUT terraform:  + source_type = "volume" 2025-09-06 00:01:32.803498 | orchestrator | 00:01:32.803 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.803512 | orchestrator | 00:01:32.803 STDOUT terraform:  } 2025-09-06 00:01:32.803523 | orchestrator | 00:01:32.803 STDOUT terraform:  + network { 2025-09-06 00:01:32.803533 | orchestrator | 00:01:32.803 STDOUT terraform:  + access_network = false 2025-09-06 00:01:32.803543 | orchestrator | 00:01:32.803 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-06 00:01:32.803558 | orchestrator | 00:01:32.803 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-06 00:01:32.803572 | orchestrator | 00:01:32.803 STDOUT terraform:  + mac = (known after apply) 2025-09-06 00:01:32.803589 | orchestrator | 00:01:32.803 STDOUT terraform:  + name = (known after apply) 2025-09-06 00:01:32.803602 | orchestrator | 00:01:32.803 STDOUT terraform:  + port = (known after apply) 2025-09-06 00:01:32.803615 | orchestrator | 00:01:32.803 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.803624 | orchestrator | 00:01:32.803 STDOUT terraform:  } 2025-09-06 00:01:32.803637 | orchestrator | 00:01:32.803 STDOUT terraform:  } 2025-09-06 00:01:32.803678 | orchestrator | 00:01:32.803 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-06 00:01:32.803716 | orchestrator | 00:01:32.803 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-06 00:01:32.803750 | orchestrator | 00:01:32.803 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-06 00:01:32.803783 | orchestrator | 00:01:32.803 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-06 00:01:32.803817 | orchestrator | 00:01:32.803 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-06 00:01:32.803904 | orchestrator | 00:01:32.803 STDOUT terraform:  + all_tags = (known after apply) 
2025-09-06 00:01:32.803918 | orchestrator | 00:01:32.803 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.803930 | orchestrator | 00:01:32.803 STDOUT terraform:  + config_drive = true 2025-09-06 00:01:32.803943 | orchestrator | 00:01:32.803 STDOUT terraform:  + created = (known after apply) 2025-09-06 00:01:32.803978 | orchestrator | 00:01:32.803 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-06 00:01:32.804007 | orchestrator | 00:01:32.803 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-06 00:01:32.804021 | orchestrator | 00:01:32.803 STDOUT terraform:  + force_delete = false 2025-09-06 00:01:32.804060 | orchestrator | 00:01:32.804 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-06 00:01:32.804092 | orchestrator | 00:01:32.804 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.804126 | orchestrator | 00:01:32.804 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.804160 | orchestrator | 00:01:32.804 STDOUT terraform:  + image_name = (known after apply) 2025-09-06 00:01:32.804183 | orchestrator | 00:01:32.804 STDOUT terraform:  + key_pair = "testbed" 2025-09-06 00:01:32.804214 | orchestrator | 00:01:32.804 STDOUT terraform:  + name = "testbed-node-0" 2025-09-06 00:01:32.804239 | orchestrator | 00:01:32.804 STDOUT terraform:  + power_state = "active" 2025-09-06 00:01:32.804273 | orchestrator | 00:01:32.804 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.804307 | orchestrator | 00:01:32.804 STDOUT terraform:  + security_groups = (known after apply) 2025-09-06 00:01:32.804331 | orchestrator | 00:01:32.804 STDOUT terraform:  + stop_before_destroy = false 2025-09-06 00:01:32.804365 | orchestrator | 00:01:32.804 STDOUT terraform:  + updated = (known after apply) 2025-09-06 00:01:32.804416 | orchestrator | 00:01:32.804 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-06 00:01:32.804428 | orchestrator | 00:01:32.804 STDOUT terraform:  + block_device { 2025-09-06 00:01:32.804446 | orchestrator | 00:01:32.804 STDOUT terraform:  + boot_index = 0 2025-09-06 00:01:32.804474 | orchestrator | 00:01:32.804 STDOUT terraform:  + delete_on_termination = false 2025-09-06 00:01:32.804503 | orchestrator | 00:01:32.804 STDOUT terraform:  + destination_type = "volume" 2025-09-06 00:01:32.804529 | orchestrator | 00:01:32.804 STDOUT terraform:  + multiattach = false 2025-09-06 00:01:32.804560 | orchestrator | 00:01:32.804 STDOUT terraform:  + source_type = "volume" 2025-09-06 00:01:32.804597 | orchestrator | 00:01:32.804 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.804609 | orchestrator | 00:01:32.804 STDOUT terraform:  } 2025-09-06 00:01:32.804619 | orchestrator | 00:01:32.804 STDOUT terraform:  + network { 2025-09-06 00:01:32.804630 | orchestrator | 00:01:32.804 STDOUT terraform:  + access_network = false 2025-09-06 00:01:32.804670 | orchestrator | 00:01:32.804 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-06 00:01:32.804694 | orchestrator | 00:01:32.804 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-06 00:01:32.804727 | orchestrator | 00:01:32.804 STDOUT terraform:  + mac = (known after apply) 2025-09-06 00:01:32.804756 | orchestrator | 00:01:32.804 STDOUT terraform:  + name = (known after apply) 2025-09-06 00:01:32.804786 | orchestrator | 00:01:32.804 STDOUT terraform:  + port = (known after apply) 2025-09-06 00:01:32.804815 | orchestrator | 00:01:32.804 STDOUT terraform:  + uuid = (known after apply) 
2025-09-06 00:01:32.804827 | orchestrator | 00:01:32.804 STDOUT terraform:  } 2025-09-06 00:01:32.804838 | orchestrator | 00:01:32.804 STDOUT terraform:  } 2025-09-06 00:01:32.804889 | orchestrator | 00:01:32.804 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-06 00:01:32.804929 | orchestrator | 00:01:32.804 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-06 00:01:32.804963 | orchestrator | 00:01:32.804 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-06 00:01:32.804999 | orchestrator | 00:01:32.804 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-06 00:01:32.805031 | orchestrator | 00:01:32.804 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-06 00:01:32.805065 | orchestrator | 00:01:32.805 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.805078 | orchestrator | 00:01:32.805 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.805105 | orchestrator | 00:01:32.805 STDOUT terraform:  + config_drive = true 2025-09-06 00:01:32.805139 | orchestrator | 00:01:32.805 STDOUT terraform:  + created = (known after apply) 2025-09-06 00:01:32.805173 | orchestrator | 00:01:32.805 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-06 00:01:32.805201 | orchestrator | 00:01:32.805 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-06 00:01:32.805224 | orchestrator | 00:01:32.805 STDOUT terraform:  + force_delete = false 2025-09-06 00:01:32.805259 | orchestrator | 00:01:32.805 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-06 00:01:32.805294 | orchestrator | 00:01:32.805 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.805328 | orchestrator | 00:01:32.805 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.805362 | orchestrator | 00:01:32.805 STDOUT terraform:  + image_name = (known after apply) 2025-09-06 00:01:32.805386 | orchestrator | 00:01:32.805 STDOUT terraform:  + key_pair = "testbed" 2025-09-06 00:01:32.805416 | orchestrator | 00:01:32.805 STDOUT terraform:  + name = "testbed-node-1" 2025-09-06 00:01:32.805439 | orchestrator | 00:01:32.805 STDOUT terraform:  + power_state = "active" 2025-09-06 00:01:32.805474 | orchestrator | 00:01:32.805 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.805511 | orchestrator | 00:01:32.805 STDOUT terraform:  + security_groups = (known after apply) 2025-09-06 00:01:32.805530 | orchestrator | 00:01:32.805 STDOUT terraform:  + stop_before_destroy = false 2025-09-06 00:01:32.805563 | orchestrator | 00:01:32.805 STDOUT terraform:  + updated = (known after apply) 2025-09-06 00:01:32.805612 | orchestrator | 00:01:32.805 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-06 00:01:32.805631 | orchestrator | 00:01:32.805 STDOUT terraform:  + block_device { 2025-09-06 00:01:32.805648 | orchestrator | 00:01:32.805 STDOUT terraform:  + boot_index = 0 2025-09-06 00:01:32.805663 | orchestrator | 00:01:32.805 STDOUT terraform:  + delete_on_termination = false 2025-09-06 00:01:32.805688 | orchestrator | 00:01:32.805 STDOUT terraform:  + destination_type = "volume" 2025-09-06 00:01:32.805714 | orchestrator | 00:01:32.805 STDOUT terraform:  + multiattach = false 2025-09-06 00:01:32.805744 | orchestrator | 00:01:32.805 STDOUT terraform:  + source_type = "volume" 2025-09-06 00:01:32.805782 | orchestrator | 00:01:32.805 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.805794 | 
orchestrator | 00:01:32.805 STDOUT terraform:  } 2025-09-06 00:01:32.805804 | orchestrator | 00:01:32.805 STDOUT terraform:  + network { 2025-09-06 00:01:32.805815 | orchestrator | 00:01:32.805 STDOUT terraform:  + access_network = false 2025-09-06 00:01:32.805865 | orchestrator | 00:01:32.805 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-06 00:01:32.805891 | orchestrator | 00:01:32.805 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-06 00:01:32.805922 | orchestrator | 00:01:32.805 STDOUT terraform:  + mac = (known after apply) 2025-09-06 00:01:32.805953 | orchestrator | 00:01:32.805 STDOUT terraform:  + name = (known after apply) 2025-09-06 00:01:32.805983 | orchestrator | 00:01:32.805 STDOUT terraform:  + port = (known after apply) 2025-09-06 00:01:32.806037 | orchestrator | 00:01:32.805 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.806125 | orchestrator | 00:01:32.806 STDOUT terraform:  } 2025-09-06 00:01:32.806173 | orchestrator | 00:01:32.806 STDOUT terraform:  } 2025-09-06 00:01:32.806223 | orchestrator | 00:01:32.806 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-06 00:01:32.806264 | orchestrator | 00:01:32.806 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-06 00:01:32.806299 | orchestrator | 00:01:32.806 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-06 00:01:32.806335 | orchestrator | 00:01:32.806 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-06 00:01:32.806369 | orchestrator | 00:01:32.806 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-06 00:01:32.806412 | orchestrator | 00:01:32.806 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.806424 | orchestrator | 00:01:32.806 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.806452 | orchestrator | 00:01:32.806 STDOUT terraform:  + config_drive = true 2025-09-06 00:01:32.806488 | orchestrator | 00:01:32.806 STDOUT terraform:  + created = (known after apply) 2025-09-06 00:01:32.806522 | orchestrator | 00:01:32.806 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-06 00:01:32.806550 | orchestrator | 00:01:32.806 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-06 00:01:32.806577 | orchestrator | 00:01:32.806 STDOUT terraform:  + force_delete = false 2025-09-06 00:01:32.806613 | orchestrator | 00:01:32.806 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-06 00:01:32.806650 | orchestrator | 00:01:32.806 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.806685 | orchestrator | 00:01:32.806 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.806720 | orchestrator | 00:01:32.806 STDOUT terraform:  + image_name = (known after apply) 2025-09-06 00:01:32.806747 | orchestrator | 00:01:32.806 STDOUT terraform:  + key_pair = "testbed" 2025-09-06 00:01:32.806777 | orchestrator | 00:01:32.806 STDOUT terraform:  + name = "testbed-node-2" 2025-09-06 00:01:32.806801 | orchestrator | 00:01:32.806 STDOUT terraform:  + power_state = "active" 2025-09-06 00:01:32.806836 | orchestrator | 00:01:32.806 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.806903 | orchestrator | 00:01:32.806 STDOUT terraform:  + security_groups = (known after apply) 2025-09-06 00:01:32.806916 | orchestrator | 00:01:32.806 STDOUT terraform:  + stop_before_destroy = false 2025-09-06 00:01:32.806949 | orchestrator | 00:01:32.806 STDOUT terraform:  + updated = (known 
after apply) 2025-09-06 00:01:32.806994 | orchestrator | 00:01:32.806 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-06 00:01:32.807006 | orchestrator | 00:01:32.806 STDOUT terraform:  + block_device { 2025-09-06 00:01:32.807030 | orchestrator | 00:01:32.806 STDOUT terraform:  + boot_index = 0 2025-09-06 00:01:32.807059 | orchestrator | 00:01:32.807 STDOUT terraform:  + delete_on_termination = false 2025-09-06 00:01:32.807088 | orchestrator | 00:01:32.807 STDOUT terraform:  + destination_type = "volume" 2025-09-06 00:01:32.807117 | orchestrator | 00:01:32.807 STDOUT terraform:  + multiattach = false 2025-09-06 00:01:32.807145 | orchestrator | 00:01:32.807 STDOUT terraform:  + source_type = "volume" 2025-09-06 00:01:32.807182 | orchestrator | 00:01:32.807 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.807197 | orchestrator | 00:01:32.807 STDOUT terraform:  } 2025-09-06 00:01:32.807204 | orchestrator | 00:01:32.807 STDOUT terraform:  + network { 2025-09-06 00:01:32.807213 | orchestrator | 00:01:32.807 STDOUT terraform:  + access_network = false 2025-09-06 00:01:32.807252 | orchestrator | 00:01:32.807 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-06 00:01:32.807282 | orchestrator | 00:01:32.807 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-06 00:01:32.807312 | orchestrator | 00:01:32.807 STDOUT terraform:  + mac = (known after apply) 2025-09-06 00:01:32.807343 | orchestrator | 00:01:32.807 STDOUT terraform:  + name = (known after apply) 2025-09-06 00:01:32.807371 | orchestrator | 00:01:32.807 STDOUT terraform:  + port = (known after apply) 2025-09-06 00:01:32.807401 | orchestrator | 00:01:32.807 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.807411 | orchestrator | 00:01:32.807 STDOUT terraform:  } 2025-09-06 00:01:32.807423 | orchestrator | 00:01:32.807 STDOUT terraform:  } 2025-09-06 00:01:32.807462 | orchestrator | 00:01:32.807 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-09-06 00:01:32.807542 | orchestrator | 00:01:32.807 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-06 00:01:32.807567 | orchestrator | 00:01:32.807 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-06 00:01:32.807612 | orchestrator | 00:01:32.807 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-06 00:01:32.807646 | orchestrator | 00:01:32.807 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-06 00:01:32.807680 | orchestrator | 00:01:32.807 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.807705 | orchestrator | 00:01:32.807 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.807727 | orchestrator | 00:01:32.807 STDOUT terraform:  + config_drive = true 2025-09-06 00:01:32.807762 | orchestrator | 00:01:32.807 STDOUT terraform:  + created = (known after apply) 2025-09-06 00:01:32.807797 | orchestrator | 00:01:32.807 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-06 00:01:32.807827 | orchestrator | 00:01:32.807 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-06 00:01:32.807863 | orchestrator | 00:01:32.807 STDOUT terraform:  + force_delete = false 2025-09-06 00:01:32.807896 | orchestrator | 00:01:32.807 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-06 00:01:32.807931 | orchestrator | 00:01:32.807 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.807965 | orchestrator | 00:01:32.807 STDOUT 
terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.807999 | orchestrator | 00:01:32.807 STDOUT terraform:  + image_name = (known after apply) 2025-09-06 00:01:32.808023 | orchestrator | 00:01:32.807 STDOUT terraform:  + key_pair = "testbed" 2025-09-06 00:01:32.808052 | orchestrator | 00:01:32.808 STDOUT terraform:  + name = "testbed-node-3" 2025-09-06 00:01:32.808076 | orchestrator | 00:01:32.808 STDOUT terraform:  + power_state = "active" 2025-09-06 00:01:32.808110 | orchestrator | 00:01:32.808 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.808144 | orchestrator | 00:01:32.808 STDOUT terraform:  + security_groups = (known after apply) 2025-09-06 00:01:32.808168 | orchestrator | 00:01:32.808 STDOUT terraform:  + stop_before_destroy = false 2025-09-06 00:01:32.808201 | orchestrator | 00:01:32.808 STDOUT terraform:  + updated = (known after apply) 2025-09-06 00:01:32.808249 | orchestrator | 00:01:32.808 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-06 00:01:32.808264 | orchestrator | 00:01:32.808 STDOUT terraform:  + block_device { 2025-09-06 00:01:32.808284 | orchestrator | 00:01:32.808 STDOUT terraform:  + boot_index = 0 2025-09-06 00:01:32.808311 | orchestrator | 00:01:32.808 STDOUT terraform:  + delete_on_termination = false 2025-09-06 00:01:32.808339 | orchestrator | 00:01:32.808 STDOUT terraform:  + destination_type = "volume" 2025-09-06 00:01:32.808366 | orchestrator | 00:01:32.808 STDOUT terraform:  + multiattach = false 2025-09-06 00:01:32.808395 | orchestrator | 00:01:32.808 STDOUT terraform:  + source_type = "volume" 2025-09-06 00:01:32.808433 | orchestrator | 00:01:32.808 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.808443 | orchestrator | 00:01:32.808 STDOUT terraform:  } 2025-09-06 00:01:32.808452 | orchestrator | 00:01:32.808 STDOUT terraform:  + network { 2025-09-06 00:01:32.808471 | orchestrator | 00:01:32.808 STDOUT terraform:  + access_network = false 2025-09-06 00:01:32.808502 | orchestrator | 00:01:32.808 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-06 00:01:32.808532 | orchestrator | 00:01:32.808 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-06 00:01:32.808563 | orchestrator | 00:01:32.808 STDOUT terraform:  + mac = (known after apply) 2025-09-06 00:01:32.808593 | orchestrator | 00:01:32.808 STDOUT terraform:  + name = (known after apply) 2025-09-06 00:01:32.808625 | orchestrator | 00:01:32.808 STDOUT terraform:  + port = (known after apply) 2025-09-06 00:01:32.808656 | orchestrator | 00:01:32.808 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.808670 | orchestrator | 00:01:32.808 STDOUT terraform:  } 2025-09-06 00:01:32.808681 | orchestrator | 00:01:32.808 STDOUT terraform:  } 2025-09-06 00:01:32.808719 | orchestrator | 00:01:32.808 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-09-06 00:01:32.808758 | orchestrator | 00:01:32.808 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-06 00:01:32.808791 | orchestrator | 00:01:32.808 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-06 00:01:32.808825 | orchestrator | 00:01:32.808 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-06 00:01:32.808908 | orchestrator | 00:01:32.808 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-06 00:01:32.808921 | orchestrator | 00:01:32.808 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.808935 | 
orchestrator | 00:01:32.808 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.808959 | orchestrator | 00:01:32.808 STDOUT terraform:  + config_drive = true 2025-09-06 00:01:32.808977 | orchestrator | 00:01:32.808 STDOUT terraform:  + created = (known after apply) 2025-09-06 00:01:32.808990 | orchestrator | 00:01:32.808 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-06 00:01:32.809020 | orchestrator | 00:01:32.808 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-06 00:01:32.809035 | orchestrator | 00:01:32.809 STDOUT terraform:  + force_delete = false 2025-09-06 00:01:32.809073 | orchestrator | 00:01:32.809 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-06 00:01:32.809111 | orchestrator | 00:01:32.809 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.809141 | orchestrator | 00:01:32.809 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.809178 | orchestrator | 00:01:32.809 STDOUT terraform:  + image_name = (known after apply) 2025-09-06 00:01:32.809193 | orchestrator | 00:01:32.809 STDOUT terraform:  + key_pair = "testbed" 2025-09-06 00:01:32.809221 | orchestrator | 00:01:32.809 STDOUT terraform:  + name = "testbed-node-4" 2025-09-06 00:01:32.809237 | orchestrator | 00:01:32.809 STDOUT terraform:  + power_state = "active" 2025-09-06 00:01:32.810156 | orchestrator | 00:01:32.809 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.816227 | orchestrator | 00:01:32.809 STDOUT terraform:  + security_groups = (known after apply) 2025-09-06 00:01:32.816253 | orchestrator | 00:01:32.809 STDOUT terraform:  + stop_before_destroy = false 2025-09-06 00:01:32.816261 | orchestrator | 00:01:32.809 STDOUT terraform:  + updated = (known after apply) 2025-09-06 00:01:32.816266 | orchestrator | 00:01:32.809 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-06 00:01:32.816273 | orchestrator | 00:01:32.809 STDOUT terraform:  + block_device { 2025-09-06 00:01:32.816279 | orchestrator | 00:01:32.809 STDOUT terraform:  + boot_index = 0 2025-09-06 00:01:32.816284 | orchestrator | 00:01:32.809 STDOUT terraform:  + delete_on_termination = false 2025-09-06 00:01:32.816289 | orchestrator | 00:01:32.809 STDOUT terraform:  + destination_type = "volume" 2025-09-06 00:01:32.816294 | orchestrator | 00:01:32.809 STDOUT terraform:  + multiattach = false 2025-09-06 00:01:32.816299 | orchestrator | 00:01:32.809 STDOUT terraform:  + source_type = "volume" 2025-09-06 00:01:32.816304 | orchestrator | 00:01:32.809 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.816309 | orchestrator | 00:01:32.809 STDOUT terraform:  } 2025-09-06 00:01:32.816314 | orchestrator | 00:01:32.809 STDOUT terraform:  + network { 2025-09-06 00:01:32.816320 | orchestrator | 00:01:32.809 STDOUT terraform:  + access_network = false 2025-09-06 00:01:32.816325 | orchestrator | 00:01:32.809 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-06 00:01:32.816330 | orchestrator | 00:01:32.809 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-06 00:01:32.816335 | orchestrator | 00:01:32.809 STDOUT terraform:  + mac = (known after apply) 2025-09-06 00:01:32.816351 | orchestrator | 00:01:32.809 STDOUT terraform:  + name = (known after apply) 2025-09-06 00:01:32.816356 | orchestrator | 00:01:32.809 STDOUT terraform:  + port = (known after apply) 2025-09-06 00:01:32.816361 | orchestrator | 00:01:32.809 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.816366 | 
orchestrator | 00:01:32.809 STDOUT terraform:  } 2025-09-06 00:01:32.816371 | orchestrator | 00:01:32.809 STDOUT terraform:  } 2025-09-06 00:01:32.816376 | orchestrator | 00:01:32.809 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-09-06 00:01:32.816382 | orchestrator | 00:01:32.809 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-06 00:01:32.816387 | orchestrator | 00:01:32.809 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-06 00:01:32.816392 | orchestrator | 00:01:32.809 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-06 00:01:32.816397 | orchestrator | 00:01:32.809 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-06 00:01:32.816402 | orchestrator | 00:01:32.809 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.816407 | orchestrator | 00:01:32.809 STDOUT terraform:  + availability_zone = "nova" 2025-09-06 00:01:32.816412 | orchestrator | 00:01:32.809 STDOUT terraform:  + config_drive = true 2025-09-06 00:01:32.816417 | orchestrator | 00:01:32.810 STDOUT terraform:  + created = (known after apply) 2025-09-06 00:01:32.816422 | orchestrator | 00:01:32.810 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-06 00:01:32.816434 | orchestrator | 00:01:32.810 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-06 00:01:32.816442 | orchestrator | 00:01:32.810 STDOUT terraform:  + force_delete = false 2025-09-06 00:01:32.816456 | orchestrator | 00:01:32.810 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-06 00:01:32.816462 | orchestrator | 00:01:32.810 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.816467 | orchestrator | 00:01:32.810 STDOUT terraform:  + image_id = (known after apply) 2025-09-06 00:01:32.816472 | orchestrator | 00:01:32.810 STDOUT terraform:  + image_name = (known after apply) 2025-09-06 00:01:32.816477 | orchestrator | 00:01:32.810 STDOUT terraform:  + key_pair = "testbed" 2025-09-06 00:01:32.816482 | orchestrator | 00:01:32.810 STDOUT terraform:  + name = "testbed-node-5" 2025-09-06 00:01:32.816487 | orchestrator | 00:01:32.810 STDOUT terraform:  + power_state = "active" 2025-09-06 00:01:32.816492 | orchestrator | 00:01:32.810 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.816497 | orchestrator | 00:01:32.810 STDOUT terraform:  + security_groups = (known after apply) 2025-09-06 00:01:32.816502 | orchestrator | 00:01:32.810 STDOUT terraform:  + stop_before_destroy = false 2025-09-06 00:01:32.816507 | orchestrator | 00:01:32.810 STDOUT terraform:  + updated = (known after apply) 2025-09-06 00:01:32.816512 | orchestrator | 00:01:32.810 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-06 00:01:32.816517 | orchestrator | 00:01:32.810 STDOUT terraform:  + block_device { 2025-09-06 00:01:32.816526 | orchestrator | 00:01:32.810 STDOUT terraform:  + boot_index = 0 2025-09-06 00:01:32.816531 | orchestrator | 00:01:32.810 STDOUT terraform:  + delete_on_termination = false 2025-09-06 00:01:32.816536 | orchestrator | 00:01:32.810 STDOUT terraform:  + destination_type = "volume" 2025-09-06 00:01:32.816541 | orchestrator | 00:01:32.810 STDOUT terraform:  + multiattach = false 2025-09-06 00:01:32.816546 | orchestrator | 00:01:32.810 STDOUT terraform:  + source_type = "volume" 2025-09-06 00:01:32.816551 | orchestrator | 00:01:32.810 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.816556 | orchestrator | 00:01:32.810 
STDOUT terraform:  } 2025-09-06 00:01:32.816561 | orchestrator | 00:01:32.810 STDOUT terraform:  + network { 2025-09-06 00:01:32.816566 | orchestrator | 00:01:32.810 STDOUT terraform:  + access_network = false 2025-09-06 00:01:32.816571 | orchestrator | 00:01:32.810 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-06 00:01:32.816576 | orchestrator | 00:01:32.810 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-06 00:01:32.816581 | orchestrator | 00:01:32.810 STDOUT terraform:  + mac = (known after apply) 2025-09-06 00:01:32.816586 | orchestrator | 00:01:32.810 STDOUT terraform:  + name = (known after apply) 2025-09-06 00:01:32.816591 | orchestrator | 00:01:32.810 STDOUT terraform:  + port = (known after apply) 2025-09-06 00:01:32.816596 | orchestrator | 00:01:32.810 STDOUT terraform:  + uuid = (known after apply) 2025-09-06 00:01:32.816601 | orchestrator | 00:01:32.810 STDOUT terraform:  } 2025-09-06 00:01:32.816606 | orchestrator | 00:01:32.810 STDOUT terraform:  } 2025-09-06 00:01:32.816611 | orchestrator | 00:01:32.810 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-09-06 00:01:32.816616 | orchestrator | 00:01:32.810 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-09-06 00:01:32.816621 | orchestrator | 00:01:32.810 STDOUT terraform:  + fingerprint = (known after apply) 2025-09-06 00:01:32.816626 | orchestrator | 00:01:32.810 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.816631 | orchestrator | 00:01:32.810 STDOUT terraform:  + name = "testbed" 2025-09-06 00:01:32.816636 | orchestrator | 00:01:32.810 STDOUT terraform:  + private_key = (sensitive value) 2025-09-06 00:01:32.816641 | orchestrator | 00:01:32.810 STDOUT terraform:  + public_key = (known after apply) 2025-09-06 00:01:32.816646 | orchestrator | 00:01:32.810 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.816660 | orchestrator | 00:01:32.811 STDOUT terraform:  + user_id = (known after apply) 2025-09-06 00:01:32.816665 | orchestrator | 00:01:32.811 STDOUT terraform:  } 2025-09-06 00:01:32.816670 | orchestrator | 00:01:32.811 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-09-06 00:01:32.816676 | orchestrator | 00:01:32.811 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-06 00:01:32.816681 | orchestrator | 00:01:32.811 STDOUT terraform:  + device = (known after apply) 2025-09-06 00:01:32.816686 | orchestrator | 00:01:32.811 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.816694 | orchestrator | 00:01:32.811 STDOUT terraform:  + instance_id = (known after apply) 2025-09-06 00:01:32.816699 | orchestrator | 00:01:32.811 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.816704 | orchestrator | 00:01:32.811 STDOUT terraform:  + volume_id = (known after apply) 2025-09-06 00:01:32.816709 | orchestrator | 00:01:32.811 STDOUT terraform:  } 2025-09-06 00:01:32.816714 | orchestrator | 00:01:32.811 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-09-06 00:01:32.816720 | orchestrator | 00:01:32.811 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-06 00:01:32.816724 | orchestrator | 00:01:32.811 STDOUT terraform:  + device = (known after apply) 2025-09-06 00:01:32.816729 | orchestrator | 00:01:32.811 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.816735 | 
orchestrator | 00:01:32.811 STDOUT terraform:  + instance_id = (known after apply) 2025-09-06 00:01:32.816739 | orchestrator | 00:01:32.811 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.816744 | orchestrator | 00:01:32.811 STDOUT terraform:  + volume_id = (known after apply) 2025-09-06 00:01:32.816749 | orchestrator | 00:01:32.811 STDOUT terraform:  } 2025-09-06 00:01:32.816754 | orchestrator | 00:01:32.811 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-09-06 00:01:32.816759 | orchestrator | 00:01:32.811 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-06 00:01:32.816764 | orchestrator | 00:01:32.811 STDOUT terraform:  + device = (known after apply) 2025-09-06 00:01:32.816769 | orchestrator | 00:01:32.811 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.816774 | orchestrator | 00:01:32.811 STDOUT terraform:  + instance_id = (known after apply) 2025-09-06 00:01:32.816779 | orchestrator | 00:01:32.811 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.816784 | orchestrator | 00:01:32.811 STDOUT terraform:  + volume_id = (known after apply) 2025-09-06 00:01:32.816789 | orchestrator | 00:01:32.811 STDOUT terraform:  } 2025-09-06 00:01:32.816794 | orchestrator | 00:01:32.811 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-09-06 00:01:32.816799 | orchestrator | 00:01:32.811 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-06 00:01:32.816804 | orchestrator | 00:01:32.811 STDOUT terraform:  + device = (known after apply) 2025-09-06 00:01:32.816809 | orchestrator | 00:01:32.811 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.816814 | orchestrator | 00:01:32.811 STDOUT terraform:  + instance_id = (known after apply) 2025-09-06 00:01:32.816819 | orchestrator | 00:01:32.811 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.816824 | orchestrator | 00:01:32.811 STDOUT terraform:  + volume_id = (known after apply) 2025-09-06 00:01:32.816829 | orchestrator | 00:01:32.811 STDOUT terraform:  } 2025-09-06 00:01:32.816834 | orchestrator | 00:01:32.811 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-09-06 00:01:32.816842 | orchestrator | 00:01:32.811 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-06 00:01:32.816911 | orchestrator | 00:01:32.811 STDOUT terraform:  + device = (known after apply) 2025-09-06 00:01:32.816926 | orchestrator | 00:01:32.812 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.816931 | orchestrator | 00:01:32.812 STDOUT terraform:  + instance_id = (known after apply) 2025-09-06 00:01:32.816936 | orchestrator | 00:01:32.812 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.816941 | orchestrator | 00:01:32.812 STDOUT terraform:  + volume_id = (known after apply) 2025-09-06 00:01:32.816946 | orchestrator | 00:01:32.812 STDOUT terraform:  } 2025-09-06 00:01:32.816951 | orchestrator | 00:01:32.812 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-09-06 00:01:32.816956 | orchestrator | 00:01:32.812 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-06 00:01:32.816960 | orchestrator | 00:01:32.812 STDOUT terraform:  + device = (known after 
apply) 2025-09-06 00:01:32.816965 | orchestrator | 00:01:32.812 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.816970 | orchestrator | 00:01:32.812 STDOUT terraform:  + instance_id = (known after apply) 2025-09-06 00:01:32.816975 | orchestrator | 00:01:32.812 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.816980 | orchestrator | 00:01:32.812 STDOUT terraform:  + volume_id = (known after apply) 2025-09-06 00:01:32.816984 | orchestrator | 00:01:32.812 STDOUT terraform:  } 2025-09-06 00:01:32.816989 | orchestrator | 00:01:32.812 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-09-06 00:01:32.816994 | orchestrator | 00:01:32.812 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-06 00:01:32.816999 | orchestrator | 00:01:32.812 STDOUT terraform:  + device = (known after apply) 2025-09-06 00:01:32.817004 | orchestrator | 00:01:32.812 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.817009 | orchestrator | 00:01:32.812 STDOUT terraform:  + instance_id = (known after apply) 2025-09-06 00:01:32.817013 | orchestrator | 00:01:32.812 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.817018 | orchestrator | 00:01:32.812 STDOUT terraform:  + volume_id = (known after apply) 2025-09-06 00:01:32.817023 | orchestrator | 00:01:32.812 STDOUT terraform:  } 2025-09-06 00:01:32.817028 | orchestrator | 00:01:32.812 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-09-06 00:01:32.817033 | orchestrator | 00:01:32.812 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-06 00:01:32.817037 | orchestrator | 00:01:32.812 STDOUT terraform:  + device = (known after apply) 2025-09-06 00:01:32.817042 | orchestrator | 00:01:32.812 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.817047 | orchestrator | 00:01:32.812 STDOUT terraform:  + instance_id = (known after apply) 2025-09-06 00:01:32.817052 | orchestrator | 00:01:32.812 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.817060 | orchestrator | 00:01:32.812 STDOUT terraform:  + volume_id = (known after apply) 2025-09-06 00:01:32.817065 | orchestrator | 00:01:32.812 STDOUT terraform:  } 2025-09-06 00:01:32.817070 | orchestrator | 00:01:32.812 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-09-06 00:01:32.817075 | orchestrator | 00:01:32.812 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-06 00:01:32.817080 | orchestrator | 00:01:32.812 STDOUT terraform:  + device = (known after apply) 2025-09-06 00:01:32.817085 | orchestrator | 00:01:32.812 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.817089 | orchestrator | 00:01:32.812 STDOUT terraform:  + instance_id = (known after apply) 2025-09-06 00:01:32.817094 | orchestrator | 00:01:32.812 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.817099 | orchestrator | 00:01:32.812 STDOUT terraform:  + volume_id = (known after apply) 2025-09-06 00:01:32.817104 | orchestrator | 00:01:32.812 STDOUT terraform:  } 2025-09-06 00:01:32.817114 | orchestrator | 00:01:32.812 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-09-06 00:01:32.817121 | orchestrator | 00:01:32.813 STDOUT terraform:  + resource 
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-09-06 00:01:32.817126 | orchestrator | 00:01:32.813 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-06 00:01:32.817131 | orchestrator | 00:01:32.813 STDOUT terraform:  + floating_ip = (known after apply) 2025-09-06 00:01:32.817135 | orchestrator | 00:01:32.813 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.817140 | orchestrator | 00:01:32.813 STDOUT terraform:  + port_id = (known after apply) 2025-09-06 00:01:32.817145 | orchestrator | 00:01:32.813 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.817150 | orchestrator | 00:01:32.813 STDOUT terraform:  } 2025-09-06 00:01:32.817155 | orchestrator | 00:01:32.813 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-09-06 00:01:32.817160 | orchestrator | 00:01:32.813 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-09-06 00:01:32.817165 | orchestrator | 00:01:32.813 STDOUT terraform:  + address = (known after apply) 2025-09-06 00:01:32.817170 | orchestrator | 00:01:32.813 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.817175 | orchestrator | 00:01:32.813 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-06 00:01:32.817179 | orchestrator | 00:01:32.813 STDOUT terraform:  + dns_name = (known after apply) 2025-09-06 00:01:32.817184 | orchestrator | 00:01:32.813 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-06 00:01:32.817189 | orchestrator | 00:01:32.813 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.817194 | orchestrator | 00:01:32.813 STDOUT terraform:  + pool = "public" 2025-09-06 00:01:32.817199 | orchestrator | 00:01:32.813 STDOUT terraform:  + port_id = (known after apply) 2025-09-06 00:01:32.817203 | orchestrator | 00:01:32.813 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.817208 | orchestrator | 00:01:32.813 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-06 00:01:32.817216 | orchestrator | 00:01:32.813 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.817221 | orchestrator | 00:01:32.813 STDOUT terraform:  } 2025-09-06 00:01:32.817226 | orchestrator | 00:01:32.813 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-09-06 00:01:32.817231 | orchestrator | 00:01:32.813 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-09-06 00:01:32.817236 | orchestrator | 00:01:32.813 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-06 00:01:32.817241 | orchestrator | 00:01:32.813 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.817246 | orchestrator | 00:01:32.813 STDOUT terraform:  + availability_zone_hints = [ 2025-09-06 00:01:32.817250 | orchestrator | 00:01:32.813 STDOUT terraform:  + "nova", 2025-09-06 00:01:32.817255 | orchestrator | 00:01:32.813 STDOUT terraform:  ] 2025-09-06 00:01:32.817260 | orchestrator | 00:01:32.813 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-06 00:01:32.817265 | orchestrator | 00:01:32.813 STDOUT terraform:  + external = (known after apply) 2025-09-06 00:01:32.817270 | orchestrator | 00:01:32.813 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.817274 | orchestrator | 00:01:32.813 STDOUT terraform:  + mtu = (known after apply) 2025-09-06 00:01:32.817279 | orchestrator | 00:01:32.813 STDOUT terraform:  + name = 
"net-testbed-management" 2025-09-06 00:01:32.817284 | orchestrator | 00:01:32.813 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-06 00:01:32.817289 | orchestrator | 00:01:32.813 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-06 00:01:32.817294 | orchestrator | 00:01:32.813 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.817302 | orchestrator | 00:01:32.813 STDOUT terraform:  + shared = (known after apply) 2025-09-06 00:01:32.817307 | orchestrator | 00:01:32.813 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.817311 | orchestrator | 00:01:32.814 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-09-06 00:01:32.817316 | orchestrator | 00:01:32.814 STDOUT terraform:  + segments (known after apply) 2025-09-06 00:01:32.817321 | orchestrator | 00:01:32.814 STDOUT terraform:  } 2025-09-06 00:01:32.817326 | orchestrator | 00:01:32.814 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-09-06 00:01:32.817331 | orchestrator | 00:01:32.814 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-09-06 00:01:32.817336 | orchestrator | 00:01:32.814 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-06 00:01:32.817341 | orchestrator | 00:01:32.814 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-06 00:01:32.817345 | orchestrator | 00:01:32.814 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-06 00:01:32.817350 | orchestrator | 00:01:32.814 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.817358 | orchestrator | 00:01:32.814 STDOUT terraform:  + device_id = (known after apply) 2025-09-06 00:01:32.817366 | orchestrator | 00:01:32.814 STDOUT terraform:  + device_owner = (known after apply) 2025-09-06 00:01:32.817371 | orchestrator | 00:01:32.814 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-06 00:01:32.817376 | orchestrator | 00:01:32.814 STDOUT terraform:  + dns_name = (known after apply) 2025-09-06 00:01:32.817381 | orchestrator | 00:01:32.814 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.817386 | orchestrator | 00:01:32.814 STDOUT terraform:  + mac_address = (known after apply) 2025-09-06 00:01:32.817390 | orchestrator | 00:01:32.814 STDOUT terraform:  + network_id = (known after apply) 2025-09-06 00:01:32.817395 | orchestrator | 00:01:32.814 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-06 00:01:32.817400 | orchestrator | 00:01:32.814 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-06 00:01:32.817405 | orchestrator | 00:01:32.814 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.817410 | orchestrator | 00:01:32.814 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-06 00:01:32.817414 | orchestrator | 00:01:32.814 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.817419 | orchestrator | 00:01:32.814 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.817424 | orchestrator | 00:01:32.814 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-06 00:01:32.817429 | orchestrator | 00:01:32.814 STDOUT terraform:  } 2025-09-06 00:01:32.817434 | orchestrator | 00:01:32.814 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.817439 | orchestrator | 00:01:32.814 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-06 00:01:32.817443 | orchestrator | 00:01:32.814 STDOUT 
terraform:  } 2025-09-06 00:01:32.817448 | orchestrator | 00:01:32.814 STDOUT terraform:  + binding (known after apply) 2025-09-06 00:01:32.817452 | orchestrator | 00:01:32.814 STDOUT terraform:  + fixed_ip { 2025-09-06 00:01:32.817457 | orchestrator | 00:01:32.814 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-09-06 00:01:32.817461 | orchestrator | 00:01:32.814 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-06 00:01:32.817466 | orchestrator | 00:01:32.814 STDOUT terraform:  } 2025-09-06 00:01:32.817470 | orchestrator | 00:01:32.814 STDOUT terraform:  } 2025-09-06 00:01:32.817475 | orchestrator | 00:01:32.814 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-09-06 00:01:32.817480 | orchestrator | 00:01:32.814 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-06 00:01:32.817487 | orchestrator | 00:01:32.814 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-06 00:01:32.817494 | orchestrator | 00:01:32.814 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-06 00:01:32.817499 | orchestrator | 00:01:32.814 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-06 00:01:32.817504 | orchestrator | 00:01:32.815 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.817508 | orchestrator | 00:01:32.815 STDOUT terraform:  + device_id = (known after apply) 2025-09-06 00:01:32.817519 | orchestrator | 00:01:32.815 STDOUT terraform:  + device_owner = (known after apply) 2025-09-06 00:01:32.817524 | orchestrator | 00:01:32.815 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-06 00:01:32.817528 | orchestrator | 00:01:32.815 STDOUT terraform:  + dns_name = (known after apply) 2025-09-06 00:01:32.817533 | orchestrator | 00:01:32.815 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.817537 | orchestrator | 00:01:32.815 STDOUT terraform:  + mac_address = (known after apply) 2025-09-06 00:01:32.817542 | orchestrator | 00:01:32.815 STDOUT terraform:  + network_id = (known after apply) 2025-09-06 00:01:32.817547 | orchestrator | 00:01:32.815 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-06 00:01:32.817551 | orchestrator | 00:01:32.815 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-06 00:01:32.817556 | orchestrator | 00:01:32.815 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.817560 | orchestrator | 00:01:32.815 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-06 00:01:32.817565 | orchestrator | 00:01:32.815 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.817569 | orchestrator | 00:01:32.815 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.817574 | orchestrator | 00:01:32.815 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-06 00:01:32.817579 | orchestrator | 00:01:32.815 STDOUT terraform:  } 2025-09-06 00:01:32.817612 | orchestrator | 00:01:32.815 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.817617 | orchestrator | 00:01:32.815 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-06 00:01:32.817622 | orchestrator | 00:01:32.815 STDOUT terraform:  } 2025-09-06 00:01:32.817626 | orchestrator | 00:01:32.815 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.817631 | orchestrator | 00:01:32.815 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-06 00:01:32.817636 | orchestrator | 00:01:32.815 STDOUT terraform:  } 
2025-09-06 00:01:32.817640 | orchestrator | 00:01:32.815 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.817645 | orchestrator | 00:01:32.815 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-06 00:01:32.817649 | orchestrator | 00:01:32.815 STDOUT terraform:  } 2025-09-06 00:01:32.817654 | orchestrator | 00:01:32.815 STDOUT terraform:  + binding (known after apply) 2025-09-06 00:01:32.817658 | orchestrator | 00:01:32.815 STDOUT terraform:  + fixed_ip { 2025-09-06 00:01:32.817663 | orchestrator | 00:01:32.815 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-09-06 00:01:32.817668 | orchestrator | 00:01:32.815 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-06 00:01:32.817672 | orchestrator | 00:01:32.815 STDOUT terraform:  } 2025-09-06 00:01:32.817677 | orchestrator | 00:01:32.815 STDOUT terraform:  } 2025-09-06 00:01:32.817681 | orchestrator | 00:01:32.815 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-09-06 00:01:32.817686 | orchestrator | 00:01:32.815 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-06 00:01:32.817694 | orchestrator | 00:01:32.815 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-06 00:01:32.817699 | orchestrator | 00:01:32.815 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-06 00:01:32.817703 | orchestrator | 00:01:32.815 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-06 00:01:32.817710 | orchestrator | 00:01:32.815 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.817715 | orchestrator | 00:01:32.815 STDOUT terraform:  + device_id = (known after apply) 2025-09-06 00:01:32.817720 | orchestrator | 00:01:32.815 STDOUT terraform:  + device_owner = (known after apply) 2025-09-06 00:01:32.817724 | orchestrator | 00:01:32.816 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-06 00:01:32.817729 | orchestrator | 00:01:32.816 STDOUT terraform:  + dns_name = (known after apply) 2025-09-06 00:01:32.817733 | orchestrator | 00:01:32.816 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.817738 | orchestrator | 00:01:32.816 STDOUT terraform:  + mac_address = (known after apply) 2025-09-06 00:01:32.823192 | orchestrator | 00:01:32.816 STDOUT terraform:  + network_id = (known after apply) 2025-09-06 00:01:32.834317 | orchestrator | 00:01:32.833 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-06 00:01:32.834514 | orchestrator | 00:01:32.834 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-06 00:01:32.834565 | orchestrator | 00:01:32.834 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.834580 | orchestrator | 00:01:32.834 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-06 00:01:32.834626 | orchestrator | 00:01:32.834 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.834641 | orchestrator | 00:01:32.834 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.834692 | orchestrator | 00:01:32.834 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-06 00:01:32.834706 | orchestrator | 00:01:32.834 STDOUT terraform:  } 2025-09-06 00:01:32.834721 | orchestrator | 00:01:32.834 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.834731 | orchestrator | 00:01:32.834 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-06 00:01:32.834744 | orchestrator | 00:01:32.834 STDOUT terraform:  } 2025-09-06 
00:01:32.834754 | orchestrator | 00:01:32.834 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.834879 | orchestrator | 00:01:32.834 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-06 00:01:32.834892 | orchestrator | 00:01:32.834 STDOUT terraform:  } 2025-09-06 00:01:32.834902 | orchestrator | 00:01:32.834 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.834930 | orchestrator | 00:01:32.834 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-06 00:01:32.834940 | orchestrator | 00:01:32.834 STDOUT terraform:  } 2025-09-06 00:01:32.834950 | orchestrator | 00:01:32.834 STDOUT terraform:  + binding (known after apply) 2025-09-06 00:01:32.834964 | orchestrator | 00:01:32.834 STDOUT terraform:  + fixed_ip { 2025-09-06 00:01:32.834993 | orchestrator | 00:01:32.834 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-09-06 00:01:32.835003 | orchestrator | 00:01:32.834 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-06 00:01:32.835013 | orchestrator | 00:01:32.834 STDOUT terraform:  } 2025-09-06 00:01:32.835023 | orchestrator | 00:01:32.834 STDOUT terraform:  } 2025-09-06 00:01:32.835036 | orchestrator | 00:01:32.834 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-09-06 00:01:32.835048 | orchestrator | 00:01:32.834 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-06 00:01:32.835061 | orchestrator | 00:01:32.835 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-06 00:01:32.835263 | orchestrator | 00:01:32.835 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-06 00:01:32.835285 | orchestrator | 00:01:32.835 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-06 00:01:32.835295 | orchestrator | 00:01:32.835 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.835322 | orchestrator | 00:01:32.835 STDOUT terraform:  + device_id = (known after apply) 2025-09-06 00:01:32.835332 | orchestrator | 00:01:32.835 STDOUT terraform:  + device_owner = (known after apply) 2025-09-06 00:01:32.835347 | orchestrator | 00:01:32.835 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-06 00:01:32.835360 | orchestrator | 00:01:32.835 STDOUT terraform:  + dns_name = (known after apply) 2025-09-06 00:01:32.835370 | orchestrator | 00:01:32.835 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.835380 | orchestrator | 00:01:32.835 STDOUT terraform:  + mac_address = (known after apply) 2025-09-06 00:01:32.835393 | orchestrator | 00:01:32.835 STDOUT terraform:  + network_id = (known after apply) 2025-09-06 00:01:32.835406 | orchestrator | 00:01:32.835 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-06 00:01:32.835578 | orchestrator | 00:01:32.835 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-06 00:01:32.835589 | orchestrator | 00:01:32.835 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.835599 | orchestrator | 00:01:32.835 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-06 00:01:32.835609 | orchestrator | 00:01:32.835 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.835619 | orchestrator | 00:01:32.835 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.835646 | orchestrator | 00:01:32.835 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-06 00:01:32.835656 | orchestrator | 00:01:32.835 STDOUT terraform:  } 2025-09-06 00:01:32.835669 | 
orchestrator | 00:01:32.835 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.835680 | orchestrator | 00:01:32.835 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-06 00:01:32.835689 | orchestrator | 00:01:32.835 STDOUT terraform:  } 2025-09-06 00:01:32.835699 | orchestrator | 00:01:32.835 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.835709 | orchestrator | 00:01:32.835 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-06 00:01:32.835726 | orchestrator | 00:01:32.835 STDOUT terraform:  } 2025-09-06 00:01:32.835740 | orchestrator | 00:01:32.835 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.835749 | orchestrator | 00:01:32.835 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-06 00:01:32.835759 | orchestrator | 00:01:32.835 STDOUT terraform:  } 2025-09-06 00:01:32.835770 | orchestrator | 00:01:32.835 STDOUT terraform:  + binding (known after apply) 2025-09-06 00:01:32.835783 | orchestrator | 00:01:32.835 STDOUT terraform:  + fixed_ip { 2025-09-06 00:01:32.835793 | orchestrator | 00:01:32.835 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-06 00:01:32.835806 | orchestrator | 00:01:32.835 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-06 00:01:32.835816 | orchestrator | 00:01:32.835 STDOUT terraform:  } 2025-09-06 00:01:32.835829 | orchestrator | 00:01:32.835 STDOUT terraform:  } 2025-09-06 00:01:32.836085 | orchestrator | 00:01:32.835 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-06 00:01:32.836097 | orchestrator | 00:01:32.835 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-06 00:01:32.836105 | orchestrator | 00:01:32.835 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-06 00:01:32.836114 | orchestrator | 00:01:32.835 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-06 00:01:32.836122 | orchestrator | 00:01:32.835 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-06 00:01:32.836130 | orchestrator | 00:01:32.836 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.836153 | orchestrator | 00:01:32.836 STDOUT terraform:  + device_id = (known after apply) 2025-09-06 00:01:32.836164 | orchestrator | 00:01:32.836 STDOUT terraform:  + device_owner = (known after apply) 2025-09-06 00:01:32.836173 | orchestrator | 00:01:32.836 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-06 00:01:32.836183 | orchestrator | 00:01:32.836 STDOUT terraform:  + dns_name = (known after apply) 2025-09-06 00:01:32.836320 | orchestrator | 00:01:32.836 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.836330 | orchestrator | 00:01:32.836 STDOUT terraform:  + mac_address = (known after apply) 2025-09-06 00:01:32.836338 | orchestrator | 00:01:32.836 STDOUT terraform:  + network_id = (known after apply) 2025-09-06 00:01:32.836346 | orchestrator | 00:01:32.836 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-06 00:01:32.836357 | orchestrator | 00:01:32.836 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-06 00:01:32.836379 | orchestrator | 00:01:32.836 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.836390 | orchestrator | 00:01:32.836 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-06 00:01:32.836501 | orchestrator | 00:01:32.836 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.836511 | orchestrator | 
00:01:32.836 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.836519 | orchestrator | 00:01:32.836 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-06 00:01:32.836533 | orchestrator | 00:01:32.836 STDOUT terraform:  } 2025-09-06 00:01:32.836541 | orchestrator | 00:01:32.836 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.836566 | orchestrator | 00:01:32.836 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-06 00:01:32.836575 | orchestrator | 00:01:32.836 STDOUT terraform:  } 2025-09-06 00:01:32.836583 | orchestrator | 00:01:32.836 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.836591 | orchestrator | 00:01:32.836 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-06 00:01:32.836599 | orchestrator | 00:01:32.836 STDOUT terraform:  } 2025-09-06 00:01:32.836610 | orchestrator | 00:01:32.836 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.836618 | orchestrator | 00:01:32.836 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-06 00:01:32.836626 | orchestrator | 00:01:32.836 STDOUT terraform:  } 2025-09-06 00:01:32.836636 | orchestrator | 00:01:32.836 STDOUT terraform:  + binding (known after apply) 2025-09-06 00:01:32.836645 | orchestrator | 00:01:32.836 STDOUT terraform:  + fixed_ip { 2025-09-06 00:01:32.836656 | orchestrator | 00:01:32.836 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-06 00:01:32.836832 | orchestrator | 00:01:32.836 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-06 00:01:32.836861 | orchestrator | 00:01:32.836 STDOUT terraform:  } 2025-09-06 00:01:32.836870 | orchestrator | 00:01:32.836 STDOUT terraform:  } 2025-09-06 00:01:32.836892 | orchestrator | 00:01:32.836 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-06 00:01:32.836901 | orchestrator | 00:01:32.836 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-06 00:01:32.836909 | orchestrator | 00:01:32.836 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-06 00:01:32.836921 | orchestrator | 00:01:32.836 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-06 00:01:32.836929 | orchestrator | 00:01:32.836 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-06 00:01:32.836937 | orchestrator | 00:01:32.836 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.836948 | orchestrator | 00:01:32.836 STDOUT terraform:  + device_id = (known after apply) 2025-09-06 00:01:32.837096 | orchestrator | 00:01:32.836 STDOUT terraform:  + device_owner = (known after apply) 2025-09-06 00:01:32.837106 | orchestrator | 00:01:32.836 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-06 00:01:32.837114 | orchestrator | 00:01:32.837 STDOUT terraform:  + dns_name = (known after apply) 2025-09-06 00:01:32.837122 | orchestrator | 00:01:32.837 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.837132 | orchestrator | 00:01:32.837 STDOUT terraform:  + mac_address = (known after apply) 2025-09-06 00:01:32.837141 | orchestrator | 00:01:32.837 STDOUT terraform:  + network_id = (known after apply) 2025-09-06 00:01:32.837165 | orchestrator | 00:01:32.837 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-06 00:01:32.837288 | orchestrator | 00:01:32.837 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-06 00:01:32.837297 | orchestrator | 00:01:32.837 STDOUT terraform:  + region = (known after apply) 
2025-09-06 00:01:32.837305 | orchestrator | 00:01:32.837 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-06 00:01:32.837313 | orchestrator | 00:01:32.837 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.837324 | orchestrator | 00:01:32.837 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.837332 | orchestrator | 00:01:32.837 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-06 00:01:32.837357 | orchestrator | 00:01:32.837 STDOUT terraform:  } 2025-09-06 00:01:32.837365 | orchestrator | 00:01:32.837 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.837376 | orchestrator | 00:01:32.837 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-06 00:01:32.837384 | orchestrator | 00:01:32.837 STDOUT terraform:  } 2025-09-06 00:01:32.837395 | orchestrator | 00:01:32.837 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.837528 | orchestrator | 00:01:32.837 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-06 00:01:32.837537 | orchestrator | 00:01:32.837 STDOUT terraform:  } 2025-09-06 00:01:32.837546 | orchestrator | 00:01:32.837 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.837554 | orchestrator | 00:01:32.837 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-06 00:01:32.837562 | orchestrator | 00:01:32.837 STDOUT terraform:  } 2025-09-06 00:01:32.837570 | orchestrator | 00:01:32.837 STDOUT terraform:  + binding (known after apply) 2025-09-06 00:01:32.837591 | orchestrator | 00:01:32.837 STDOUT terraform:  + fixed_ip { 2025-09-06 00:01:32.837599 | orchestrator | 00:01:32.837 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-06 00:01:32.837610 | orchestrator | 00:01:32.837 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-06 00:01:32.837619 | orchestrator | 00:01:32.837 STDOUT terraform:  } 2025-09-06 00:01:32.837627 | orchestrator | 00:01:32.837 STDOUT terraform:  } 2025-09-06 00:01:32.837635 | orchestrator | 00:01:32.837 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-06 00:01:32.837645 | orchestrator | 00:01:32.837 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-06 00:01:32.837792 | orchestrator | 00:01:32.837 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-06 00:01:32.837802 | orchestrator | 00:01:32.837 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-06 00:01:32.837810 | orchestrator | 00:01:32.837 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-06 00:01:32.837818 | orchestrator | 00:01:32.837 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.837828 | orchestrator | 00:01:32.837 STDOUT terraform:  + device_id = (known after apply) 2025-09-06 00:01:32.837836 | orchestrator | 00:01:32.837 STDOUT terraform:  + device_owner = (known after apply) 2025-09-06 00:01:32.837876 | orchestrator | 00:01:32.837 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-06 00:01:32.837898 | orchestrator | 00:01:32.837 STDOUT terraform:  + dns_name = (known after apply) 2025-09-06 00:01:32.838025 | orchestrator | 00:01:32.837 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.838057 | orchestrator | 00:01:32.837 STDOUT terraform:  + mac_address = (known after apply) 2025-09-06 00:01:32.838065 | orchestrator | 00:01:32.837 STDOUT terraform:  + network_id = (known after apply) 2025-09-06 00:01:32.838275 | orchestrator | 00:01:32.837 STDOUT terraform: 
 + port_security_enabled = (known after apply) 2025-09-06 00:01:32.838637 | orchestrator | 00:01:32.838 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-06 00:01:32.839066 | orchestrator | 00:01:32.838 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.839793 | orchestrator | 00:01:32.839 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-06 00:01:32.841479 | orchestrator | 00:01:32.839 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.841504 | orchestrator | 00:01:32.840 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.841512 | orchestrator | 00:01:32.840 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-06 00:01:32.841520 | orchestrator | 00:01:32.840 STDOUT terraform:  } 2025-09-06 00:01:32.841527 | orchestrator | 00:01:32.840 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.841533 | orchestrator | 00:01:32.840 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-06 00:01:32.841540 | orchestrator | 00:01:32.841 STDOUT terraform:  } 2025-09-06 00:01:32.841547 | orchestrator | 00:01:32.841 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.841634 | orchestrator | 00:01:32.841 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-06 00:01:32.841705 | orchestrator | 00:01:32.841 STDOUT terraform:  } 2025-09-06 00:01:32.841820 | orchestrator | 00:01:32.841 STDOUT terraform:  + allowed_address_pairs { 2025-09-06 00:01:32.842062 | orchestrator | 00:01:32.841 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-06 00:01:32.842419 | orchestrator | 00:01:32.842 STDOUT terraform:  } 2025-09-06 00:01:32.842755 | orchestrator | 00:01:32.842 STDOUT terraform:  + binding (known after apply) 2025-09-06 00:01:32.842804 | orchestrator | 00:01:32.842 STDOUT terraform:  + fixed_ip { 2025-09-06 00:01:32.848198 | orchestrator | 00:01:32.842 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-06 00:01:32.848239 | orchestrator | 00:01:32.848 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-06 00:01:32.848248 | orchestrator | 00:01:32.848 STDOUT terraform:  } 2025-09-06 00:01:32.848255 | orchestrator | 00:01:32.848 STDOUT terraform:  } 2025-09-06 00:01:32.848276 | orchestrator | 00:01:32.848 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-06 00:01:32.848322 | orchestrator | 00:01:32.848 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-06 00:01:32.848344 | orchestrator | 00:01:32.848 STDOUT terraform:  + force_destroy = false 2025-09-06 00:01:32.848373 | orchestrator | 00:01:32.848 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.848402 | orchestrator | 00:01:32.848 STDOUT terraform:  + port_id = (known after apply) 2025-09-06 00:01:32.848430 | orchestrator | 00:01:32.848 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.848459 | orchestrator | 00:01:32.848 STDOUT terraform:  + router_id = (known after apply) 2025-09-06 00:01:32.848488 | orchestrator | 00:01:32.848 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-06 00:01:32.848497 | orchestrator | 00:01:32.848 STDOUT terraform:  } 2025-09-06 00:01:32.848532 | orchestrator | 00:01:32.848 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-06 00:01:32.848567 | orchestrator | 00:01:32.848 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-06 00:01:32.848603 | orchestrator | 
00:01:32.848 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-06 00:01:32.848639 | orchestrator | 00:01:32.848 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.848660 | orchestrator | 00:01:32.848 STDOUT terraform:  + availability_zone_hints = [ 2025-09-06 00:01:32.848670 | orchestrator | 00:01:32.848 STDOUT terraform:  + "nova", 2025-09-06 00:01:32.848678 | orchestrator | 00:01:32.848 STDOUT terraform:  ] 2025-09-06 00:01:32.848717 | orchestrator | 00:01:32.848 STDOUT terraform:  + distributed = (known after apply) 2025-09-06 00:01:32.848752 | orchestrator | 00:01:32.848 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-06 00:01:32.848801 | orchestrator | 00:01:32.848 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-06 00:01:32.848839 | orchestrator | 00:01:32.848 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-06 00:01:32.848889 | orchestrator | 00:01:32.848 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.848918 | orchestrator | 00:01:32.848 STDOUT terraform:  + name = "testbed" 2025-09-06 00:01:32.848953 | orchestrator | 00:01:32.848 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.848990 | orchestrator | 00:01:32.848 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.849018 | orchestrator | 00:01:32.848 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-06 00:01:32.849027 | orchestrator | 00:01:32.849 STDOUT terraform:  } 2025-09-06 00:01:32.849080 | orchestrator | 00:01:32.849 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-06 00:01:32.849131 | orchestrator | 00:01:32.849 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-06 00:01:32.849154 | orchestrator | 00:01:32.849 STDOUT terraform:  + description = "ssh" 2025-09-06 00:01:32.849184 | orchestrator | 00:01:32.849 STDOUT terraform:  + direction = "ingress" 2025-09-06 00:01:32.849209 | orchestrator | 00:01:32.849 STDOUT terraform:  + ethertype = "IPv4" 2025-09-06 00:01:32.849245 | orchestrator | 00:01:32.849 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.849267 | orchestrator | 00:01:32.849 STDOUT terraform:  + port_range_max = 22 2025-09-06 00:01:32.849290 | orchestrator | 00:01:32.849 STDOUT terraform:  + port_range_min = 22 2025-09-06 00:01:32.849308 | orchestrator | 00:01:32.849 STDOUT terraform:  + protocol = "tcp" 2025-09-06 00:01:32.849346 | orchestrator | 00:01:32.849 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.849380 | orchestrator | 00:01:32.849 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-06 00:01:32.849414 | orchestrator | 00:01:32.849 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-06 00:01:32.849443 | orchestrator | 00:01:32.849 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-06 00:01:32.849477 | orchestrator | 00:01:32.849 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-06 00:01:32.849513 | orchestrator | 00:01:32.849 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.849522 | orchestrator | 00:01:32.849 STDOUT terraform:  } 2025-09-06 00:01:32.849574 | orchestrator | 00:01:32.849 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-06 00:01:32.849625 | orchestrator | 00:01:32.849 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-06 00:01:32.849653 | orchestrator | 00:01:32.849 STDOUT terraform:  + description = "wireguard" 2025-09-06 00:01:32.849681 | orchestrator | 00:01:32.849 STDOUT terraform:  + direction = "ingress" 2025-09-06 00:01:32.849708 | orchestrator | 00:01:32.849 STDOUT terraform:  + ethertype = "IPv4" 2025-09-06 00:01:32.849744 | orchestrator | 00:01:32.849 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.849767 | orchestrator | 00:01:32.849 STDOUT terraform:  + port_range_max = 51820 2025-09-06 00:01:32.849791 | orchestrator | 00:01:32.849 STDOUT terraform:  + port_range_min = 51820 2025-09-06 00:01:32.849814 | orchestrator | 00:01:32.849 STDOUT terraform:  + protocol = "udp" 2025-09-06 00:01:32.849887 | orchestrator | 00:01:32.849 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.849898 | orchestrator | 00:01:32.849 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-06 00:01:32.849924 | orchestrator | 00:01:32.849 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-06 00:01:32.849951 | orchestrator | 00:01:32.849 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-06 00:01:32.849990 | orchestrator | 00:01:32.849 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-06 00:01:32.850041 | orchestrator | 00:01:32.849 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.850050 | orchestrator | 00:01:32.850 STDOUT terraform:  } 2025-09-06 00:01:32.850104 | orchestrator | 00:01:32.850 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-06 00:01:32.850155 | orchestrator | 00:01:32.850 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-06 00:01:32.850183 | orchestrator | 00:01:32.850 STDOUT terraform:  + direction = "ingress" 2025-09-06 00:01:32.850207 | orchestrator | 00:01:32.850 STDOUT terraform:  + ethertype = "IPv4" 2025-09-06 00:01:32.850243 | orchestrator | 00:01:32.850 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.850269 | orchestrator | 00:01:32.850 STDOUT terraform:  + protocol = "tcp" 2025-09-06 00:01:32.850304 | orchestrator | 00:01:32.850 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.850340 | orchestrator | 00:01:32.850 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-06 00:01:32.850376 | orchestrator | 00:01:32.850 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-06 00:01:32.850412 | orchestrator | 00:01:32.850 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-06 00:01:32.850448 | orchestrator | 00:01:32.850 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-06 00:01:32.850484 | orchestrator | 00:01:32.850 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.850492 | orchestrator | 00:01:32.850 STDOUT terraform:  } 2025-09-06 00:01:32.850545 | orchestrator | 00:01:32.850 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-06 00:01:32.850596 | orchestrator | 00:01:32.850 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-06 00:01:32.850624 | orchestrator | 00:01:32.850 STDOUT terraform:  + direction = "ingress" 2025-09-06 00:01:32.850649 | orchestrator | 00:01:32.850 STDOUT terraform:  
+ ethertype = "IPv4" 2025-09-06 00:01:32.850684 | orchestrator | 00:01:32.850 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.850709 | orchestrator | 00:01:32.850 STDOUT terraform:  + protocol = "udp" 2025-09-06 00:01:32.850747 | orchestrator | 00:01:32.850 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.850780 | orchestrator | 00:01:32.850 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-06 00:01:32.850815 | orchestrator | 00:01:32.850 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-06 00:01:32.850860 | orchestrator | 00:01:32.850 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-06 00:01:32.850893 | orchestrator | 00:01:32.850 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-06 00:01:32.850929 | orchestrator | 00:01:32.850 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.850937 | orchestrator | 00:01:32.850 STDOUT terraform:  } 2025-09-06 00:01:32.850988 | orchestrator | 00:01:32.850 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-06 00:01:32.851041 | orchestrator | 00:01:32.850 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-06 00:01:32.851069 | orchestrator | 00:01:32.851 STDOUT terraform:  + direction = "ingress" 2025-09-06 00:01:32.851093 | orchestrator | 00:01:32.851 STDOUT terraform:  + ethertype = "IPv4" 2025-09-06 00:01:32.851130 | orchestrator | 00:01:32.851 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.851154 | orchestrator | 00:01:32.851 STDOUT terraform:  + protocol = "icmp" 2025-09-06 00:01:32.851189 | orchestrator | 00:01:32.851 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.851225 | orchestrator | 00:01:32.851 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-06 00:01:32.851259 | orchestrator | 00:01:32.851 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-06 00:01:32.851290 | orchestrator | 00:01:32.851 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-06 00:01:32.851348 | orchestrator | 00:01:32.851 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-06 00:01:32.851357 | orchestrator | 00:01:32.851 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.851364 | orchestrator | 00:01:32.851 STDOUT terraform:  } 2025-09-06 00:01:32.851414 | orchestrator | 00:01:32.851 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-06 00:01:32.851465 | orchestrator | 00:01:32.851 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-06 00:01:32.851493 | orchestrator | 00:01:32.851 STDOUT terraform:  + direction = "ingress" 2025-09-06 00:01:32.851518 | orchestrator | 00:01:32.851 STDOUT terraform:  + ethertype = "IPv4" 2025-09-06 00:01:32.851554 | orchestrator | 00:01:32.851 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.851578 | orchestrator | 00:01:32.851 STDOUT terraform:  + protocol = "tcp" 2025-09-06 00:01:32.851613 | orchestrator | 00:01:32.851 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.851647 | orchestrator | 00:01:32.851 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-06 00:01:32.851682 | orchestrator | 00:01:32.851 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-06 
00:01:32.851710 | orchestrator | 00:01:32.851 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-06 00:01:32.851744 | orchestrator | 00:01:32.851 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-06 00:01:32.851779 | orchestrator | 00:01:32.851 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.851786 | orchestrator | 00:01:32.851 STDOUT terraform:  } 2025-09-06 00:01:32.851838 | orchestrator | 00:01:32.851 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-06 00:01:32.851905 | orchestrator | 00:01:32.851 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-06 00:01:32.851931 | orchestrator | 00:01:32.851 STDOUT terraform:  + direction = "ingress" 2025-09-06 00:01:32.851955 | orchestrator | 00:01:32.851 STDOUT terraform:  + ethertype = "IPv4" 2025-09-06 00:01:32.851991 | orchestrator | 00:01:32.851 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.852016 | orchestrator | 00:01:32.851 STDOUT terraform:  + protocol = "udp" 2025-09-06 00:01:32.852052 | orchestrator | 00:01:32.852 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.852085 | orchestrator | 00:01:32.852 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-06 00:01:32.852120 | orchestrator | 00:01:32.852 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-06 00:01:32.852156 | orchestrator | 00:01:32.852 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-06 00:01:32.852191 | orchestrator | 00:01:32.852 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-06 00:01:32.852227 | orchestrator | 00:01:32.852 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.852235 | orchestrator | 00:01:32.852 STDOUT terraform:  } 2025-09-06 00:01:32.852285 | orchestrator | 00:01:32.852 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-09-06 00:01:32.852334 | orchestrator | 00:01:32.852 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-09-06 00:01:32.852361 | orchestrator | 00:01:32.852 STDOUT terraform:  + direction = "ingress" 2025-09-06 00:01:32.852388 | orchestrator | 00:01:32.852 STDOUT terraform:  + ethertype = "IPv4" 2025-09-06 00:01:32.852420 | orchestrator | 00:01:32.852 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.852444 | orchestrator | 00:01:32.852 STDOUT terraform:  + protocol = "icmp" 2025-09-06 00:01:32.852480 | orchestrator | 00:01:32.852 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.852515 | orchestrator | 00:01:32.852 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-06 00:01:32.852550 | orchestrator | 00:01:32.852 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-06 00:01:32.852578 | orchestrator | 00:01:32.852 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-06 00:01:32.852613 | orchestrator | 00:01:32.852 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-06 00:01:32.852648 | orchestrator | 00:01:32.852 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.852656 | orchestrator | 00:01:32.852 STDOUT terraform:  } 2025-09-06 00:01:32.852704 | orchestrator | 00:01:32.852 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-09-06 00:01:32.852757 | orchestrator | 
00:01:32.852 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-09-06 00:01:32.852775 | orchestrator | 00:01:32.852 STDOUT terraform:  + description = "vrrp" 2025-09-06 00:01:32.852803 | orchestrator | 00:01:32.852 STDOUT terraform:  + direction = "ingress" 2025-09-06 00:01:32.852828 | orchestrator | 00:01:32.852 STDOUT terraform:  + ethertype = "IPv4" 2025-09-06 00:01:32.852876 | orchestrator | 00:01:32.852 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.852899 | orchestrator | 00:01:32.852 STDOUT terraform:  + protocol = "112" 2025-09-06 00:01:32.852935 | orchestrator | 00:01:32.852 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.852970 | orchestrator | 00:01:32.852 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-06 00:01:32.853004 | orchestrator | 00:01:32.852 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-06 00:01:32.853032 | orchestrator | 00:01:32.853 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-06 00:01:32.853068 | orchestrator | 00:01:32.853 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-06 00:01:32.853103 | orchestrator | 00:01:32.853 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.853111 | orchestrator | 00:01:32.853 STDOUT terraform:  } 2025-09-06 00:01:32.853160 | orchestrator | 00:01:32.853 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-09-06 00:01:32.853209 | orchestrator | 00:01:32.853 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-09-06 00:01:32.853235 | orchestrator | 00:01:32.853 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.853268 | orchestrator | 00:01:32.853 STDOUT terraform:  + description = "management security group" 2025-09-06 00:01:32.853295 | orchestrator | 00:01:32.853 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.853323 | orchestrator | 00:01:32.853 STDOUT terraform:  + name = "testbed-management" 2025-09-06 00:01:32.853351 | orchestrator | 00:01:32.853 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.853377 | orchestrator | 00:01:32.853 STDOUT terraform:  + stateful = (known after apply) 2025-09-06 00:01:32.853405 | orchestrator | 00:01:32.853 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.853412 | orchestrator | 00:01:32.853 STDOUT terraform:  } 2025-09-06 00:01:32.853458 | orchestrator | 00:01:32.853 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-09-06 00:01:32.853503 | orchestrator | 00:01:32.853 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-09-06 00:01:32.853533 | orchestrator | 00:01:32.853 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.853562 | orchestrator | 00:01:32.853 STDOUT terraform:  + description = "node security group" 2025-09-06 00:01:32.853585 | orchestrator | 00:01:32.853 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.853607 | orchestrator | 00:01:32.853 STDOUT terraform:  + name = "testbed-node" 2025-09-06 00:01:32.853634 | orchestrator | 00:01:32.853 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.853661 | orchestrator | 00:01:32.853 STDOUT terraform:  + stateful = (known after apply) 2025-09-06 00:01:32.853689 | orchestrator | 00:01:32.853 STDOUT terraform:  + tenant_id = (known after 
apply) 2025-09-06 00:01:32.853697 | orchestrator | 00:01:32.853 STDOUT terraform:  } 2025-09-06 00:01:32.853740 | orchestrator | 00:01:32.853 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-06 00:01:32.853783 | orchestrator | 00:01:32.853 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-06 00:01:32.853813 | orchestrator | 00:01:32.853 STDOUT terraform:  + all_tags = (known after apply) 2025-09-06 00:01:32.853845 | orchestrator | 00:01:32.853 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-06 00:01:32.853893 | orchestrator | 00:01:32.853 STDOUT terraform:  + dns_nameservers = [ 2025-09-06 00:01:32.853901 | orchestrator | 00:01:32.853 STDOUT terraform:  + "8.8.8.8", 2025-09-06 00:01:32.853908 | orchestrator | 00:01:32.853 STDOUT terraform:  + "9.9.9.9", 2025-09-06 00:01:32.853925 | orchestrator | 00:01:32.853 STDOUT terraform:  ] 2025-09-06 00:01:32.853933 | orchestrator | 00:01:32.853 STDOUT terraform:  + enable_dhcp = true 2025-09-06 00:01:32.853942 | orchestrator | 00:01:32.853 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-06 00:01:32.853968 | orchestrator | 00:01:32.853 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.853985 | orchestrator | 00:01:32.853 STDOUT terraform:  + ip_version = 4 2025-09-06 00:01:32.854033 | orchestrator | 00:01:32.853 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-06 00:01:32.854473 | orchestrator | 00:01:32.854 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-06 00:01:32.855011 | orchestrator | 00:01:32.854 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-06 00:01:32.855475 | orchestrator | 00:01:32.855 STDOUT terraform:  + network_id = (known after apply) 2025-09-06 00:01:32.855754 | orchestrator | 00:01:32.855 STDOUT terraform:  + no_gateway = false 2025-09-06 00:01:32.856297 | orchestrator | 00:01:32.855 STDOUT terraform:  + region = (known after apply) 2025-09-06 00:01:32.856801 | orchestrator | 00:01:32.856 STDOUT terraform:  + service_types = (known after apply) 2025-09-06 00:01:32.857743 | orchestrator | 00:01:32.856 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-06 00:01:32.857938 | orchestrator | 00:01:32.857 STDOUT terraform:  + allocation_pool { 2025-09-06 00:01:32.858241 | orchestrator | 00:01:32.857 STDOUT terraform:  + end = "192.168.31.250" 2025-09-06 00:01:32.858767 | orchestrator | 00:01:32.858 STDOUT terraform:  + start = "192.168.31.200" 2025-09-06 00:01:32.859055 | orchestrator | 00:01:32.858 STDOUT terraform:  } 2025-09-06 00:01:32.859278 | orchestrator | 00:01:32.859 STDOUT terraform:  } 2025-09-06 00:01:32.859513 | orchestrator | 00:01:32.859 STDOUT terraform:  # terraform_data.image will be created 2025-09-06 00:01:32.859846 | orchestrator | 00:01:32.859 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-06 00:01:32.859986 | orchestrator | 00:01:32.859 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.860268 | orchestrator | 00:01:32.860 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-06 00:01:32.860442 | orchestrator | 00:01:32.860 STDOUT terraform:  + output = (known after apply) 2025-09-06 00:01:32.866218 | orchestrator | 00:01:32.860 STDOUT terraform:  } 2025-09-06 00:01:32.866243 | orchestrator | 00:01:32.863 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-06 00:01:32.866248 | orchestrator | 00:01:32.863 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-06 
00:01:32.866252 | orchestrator | 00:01:32.863 STDOUT terraform:  + id = (known after apply) 2025-09-06 00:01:32.866256 | orchestrator | 00:01:32.863 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-06 00:01:32.866260 | orchestrator | 00:01:32.863 STDOUT terraform:  + output = (known after apply) 2025-09-06 00:01:32.866264 | orchestrator | 00:01:32.863 STDOUT terraform:  } 2025-09-06 00:01:32.866269 | orchestrator | 00:01:32.863 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-06 00:01:32.866273 | orchestrator | 00:01:32.863 STDOUT terraform: Changes to Outputs: 2025-09-06 00:01:32.866285 | orchestrator | 00:01:32.863 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-06 00:01:32.866290 | orchestrator | 00:01:32.863 STDOUT terraform:  + private_key = (sensitive value) 2025-09-06 00:01:33.014831 | orchestrator | 00:01:33.014 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-06 00:01:33.015037 | orchestrator | 00:01:33.014 STDOUT terraform: terraform_data.image: Creating... 2025-09-06 00:01:33.015502 | orchestrator | 00:01:33.015 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=c436f7e4-bcae-42fd-dd27-1b9703573098] 2025-09-06 00:01:33.015927 | orchestrator | 00:01:33.015 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=91d1705d-5247-e651-bf0a-ff42d5bd15c6] 2025-09-06 00:01:33.032513 | orchestrator | 00:01:33.031 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-06 00:01:33.033897 | orchestrator | 00:01:33.033 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-06 00:01:33.047369 | orchestrator | 00:01:33.046 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-09-06 00:01:33.048188 | orchestrator | 00:01:33.047 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-09-06 00:01:33.054082 | orchestrator | 00:01:33.052 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-06 00:01:33.054131 | orchestrator | 00:01:33.052 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-09-06 00:01:33.054136 | orchestrator | 00:01:33.052 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-09-06 00:01:33.057539 | orchestrator | 00:01:33.054 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-09-06 00:01:33.072216 | orchestrator | 00:01:33.072 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-06 00:01:33.078401 | orchestrator | 00:01:33.078 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-09-06 00:01:33.547717 | orchestrator | 00:01:33.547 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-06 00:01:33.552734 | orchestrator | 00:01:33.552 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-09-06 00:01:33.619474 | orchestrator | 00:01:33.617 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-09-06 00:01:33.624488 | orchestrator | 00:01:33.624 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
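
The plan above ends with "Plan: 64 to add, 0 to change, 0 to destroy" and, among other things, defines one management port per node, each carrying the same four allowed_address_pairs and a fixed IP in 192.168.16.0/20 (index [5] gets 192.168.16.15). A minimal Terraform sketch of such a port definition follows; the count of 6, the "+ 10" address offset, and the referenced network/subnet/security-group resources are assumptions inferred from this plan output, not taken from the real testbed configuration.

    resource "openstack_networking_port_v2" "node_port_management" {
      count              = 6                                                           # assumed; the plan shows indices [0]..[5]
      network_id         = openstack_networking_network_v2.net_management.id           # assumed reference
      security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id]   # assumed reference

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = "192.168.16.${count.index + 10}"                                  # inferred: index [5] -> 192.168.16.15
      }

      # The same four pairs appear on every node port in the plan output.
      allowed_address_pairs { ip_address = "192.168.112.0/20" }
      allowed_address_pairs { ip_address = "192.168.16.254/20" }
      allowed_address_pairs { ip_address = "192.168.16.8/20" }
      allowed_address_pairs { ip_address = "192.168.16.9/20" }
    }
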
2025-09-06 00:01:33.837139 | orchestrator | 00:01:33.836 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-06 00:01:33.844101 | orchestrator | 00:01:33.843 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-06 00:01:34.161750 | orchestrator | 00:01:34.161 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=2e471f35-8999-424b-8e48-f8f80c058a7d] 2025-09-06 00:01:34.166144 | orchestrator | 00:01:34.165 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-09-06 00:01:36.705497 | orchestrator | 00:01:36.705 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=ff2df27d-11ce-481a-9d5b-51960fd8aeff] 2025-09-06 00:01:36.709638 | orchestrator | 00:01:36.709 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=ff2245c5-2416-47aa-a035-68e781151c74] 2025-09-06 00:01:36.719601 | orchestrator | 00:01:36.719 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-09-06 00:01:36.721698 | orchestrator | 00:01:36.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=8fcef200-ddbb-407c-9fba-bf8a684fde8b] 2025-09-06 00:01:36.723310 | orchestrator | 00:01:36.723 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-09-06 00:01:36.727355 | orchestrator | 00:01:36.727 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-09-06 00:01:36.735070 | orchestrator | 00:01:36.734 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=f5461442ab697c462304c656dcbe0e5ce8fbd47b] 2025-09-06 00:01:36.736567 | orchestrator | 00:01:36.736 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=a042a858dbf5c03018b3c77a37c38f8f1faa901f] 2025-09-06 00:01:36.743664 | orchestrator | 00:01:36.743 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-09-06 00:01:36.745022 | orchestrator | 00:01:36.744 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-09-06 00:01:36.751152 | orchestrator | 00:01:36.750 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=a6f67441-1efd-42d1-ae3b-c249d4af45c4] 2025-09-06 00:01:36.754865 | orchestrator | 00:01:36.754 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=59e1d33e-4f47-4176-9d4f-6bd749639634] 2025-09-06 00:01:36.757243 | orchestrator | 00:01:36.757 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-09-06 00:01:36.758802 | orchestrator | 00:01:36.758 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-09-06 00:01:36.762110 | orchestrator | 00:01:36.761 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=4b95c2e9-50f3-4582-afe8-fe749e38f7c5] 2025-09-06 00:01:36.781312 | orchestrator | 00:01:36.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 
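
The apply creates the compute keypair "testbed" and then writes local_sensitive_file.id_rsa and local_file.id_rsa_pub in well under a second. A plausible wiring of these three resources is sketched below, assuming the keypair is generated server-side (no public_key supplied) so that its material can be written to local files; the filenames are hypothetical.

    resource "openstack_compute_keypair_v2" "key" {
      name = "testbed"                                                 # matches the id reported in the apply output
    }

    resource "local_sensitive_file" "id_rsa" {
      content         = openstack_compute_keypair_v2.key.private_key   # only populated when OpenStack generates the key
      filename        = "${path.module}/id_rsa.testbed"                # hypothetical path
      file_permission = "0600"
    }

    resource "local_file" "id_rsa_pub" {
      content  = openstack_compute_keypair_v2.key.public_key
      filename = "${path.module}/id_rsa.testbed.pub"                   # hypothetical path
    }
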
2025-09-06 00:01:36.813873 | orchestrator | 00:01:36.813 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=60cce0b1-ac13-42c3-8474-28bd0504aaba] 2025-09-06 00:01:36.823872 | orchestrator | 00:01:36.823 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-09-06 00:01:36.860037 | orchestrator | 00:01:36.859 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=8526d803-93b6-4435-afbc-8fa992e96ed7] 2025-09-06 00:01:37.053539 | orchestrator | 00:01:37.053 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=25619c3a-8da8-43cb-a754-e63f9339b6a8] 2025-09-06 00:01:37.691418 | orchestrator | 00:01:37.691 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=65058a2d-9ed5-42e1-82bc-98fd0c963167] 2025-09-06 00:01:37.768717 | orchestrator | 00:01:37.768 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=4ff13172-570c-4b96-8ff2-cec8e371f908] 2025-09-06 00:01:37.777480 | orchestrator | 00:01:37.777 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-09-06 00:01:40.164819 | orchestrator | 00:01:40.164 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=40122d3f-0139-48c0-a1ea-e85093653425] 2025-09-06 00:01:40.190757 | orchestrator | 00:01:40.190 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=078fca39-f411-429e-9193-aac97937ed20] 2025-09-06 00:01:40.219150 | orchestrator | 00:01:40.218 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=f18475ff-2e12-4c30-992f-77f53bec54c1] 2025-09-06 00:01:40.520421 | orchestrator | 00:01:40.520 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=ddf87fb6-4780-4596-86d4-c5a6d6af40b9] 2025-09-06 00:01:40.581291 | orchestrator | 00:01:40.581 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=16b07e0f-1842-4d83-ac35-c8852fb0b626] 2025-09-06 00:01:40.587577 | orchestrator | 00:01:40.587 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=0c3f1420-3954-4bd3-a390-a5ebcf190ecf] 2025-09-06 00:01:41.507140 | orchestrator | 00:01:41.506 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=3e11b460-f008-48a1-8414-8f931e8f118f] 2025-09-06 00:01:41.515091 | orchestrator | 00:01:41.514 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-09-06 00:01:41.516371 | orchestrator | 00:01:41.516 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-09-06 00:01:41.517018 | orchestrator | 00:01:41.516 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-09-06 00:01:41.733450 | orchestrator | 00:01:41.733 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=729af3e4-a788-43a5-99e8-87d9ab30b5eb] 2025-09-06 00:01:41.750090 | orchestrator | 00:01:41.749 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
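
The management subnet that finishes here matches the plan entry shown earlier in this output: CIDR 192.168.16.0/20, DNS servers 8.8.8.8 and 9.9.9.9, DHCP enabled, and an allocation pool of 192.168.31.200-192.168.31.250. A sketch of the corresponding resource follows; only the network_id reference is an assumption.

    resource "openstack_networking_subnet_v2" "subnet_management" {
      name            = "subnet-testbed-management"
      network_id      = openstack_networking_network_v2.net_management.id   # assumed reference
      cidr            = "192.168.16.0/20"
      ip_version      = 4
      enable_dhcp     = true
      dns_nameservers = ["8.8.8.8", "9.9.9.9"]

      allocation_pool {
        start = "192.168.31.200"
        end   = "192.168.31.250"
      }
    }

Note that the fixed port addresses seen in the plan (for example 192.168.16.14 and 192.168.16.15) sit outside this allocation pool, so DHCP-assigned addresses cannot collide with them.
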
2025-09-06 00:01:41.750165 | orchestrator | 00:01:41.749 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-09-06 00:01:41.751165 | orchestrator | 00:01:41.751 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-09-06 00:01:41.751651 | orchestrator | 00:01:41.751 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-09-06 00:01:41.753972 | orchestrator | 00:01:41.753 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-09-06 00:01:41.757048 | orchestrator | 00:01:41.756 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-09-06 00:01:41.796354 | orchestrator | 00:01:41.796 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=02ade87a-4c20-4295-82bd-95a2ed321801] 2025-09-06 00:01:41.803052 | orchestrator | 00:01:41.802 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-09-06 00:01:41.803356 | orchestrator | 00:01:41.803 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-09-06 00:01:41.811811 | orchestrator | 00:01:41.811 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-09-06 00:01:41.984245 | orchestrator | 00:01:41.983 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=aa27e147-0ef1-4251-b18f-1de9689adc26] 2025-09-06 00:01:41.991574 | orchestrator | 00:01:41.991 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-09-06 00:01:42.001252 | orchestrator | 00:01:42.000 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=da22d51c-3c5b-4b98-89c1-ec504288a00e] 2025-09-06 00:01:42.008682 | orchestrator | 00:01:42.008 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-09-06 00:01:42.156224 | orchestrator | 00:01:42.155 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=c77ca52e-2d62-4cdd-b57d-a4ccd167bd44] 2025-09-06 00:01:42.174735 | orchestrator | 00:01:42.174 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-09-06 00:01:42.176446 | orchestrator | 00:01:42.176 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=fa7fcf99-8011-4a70-9e5d-d102c02e235b] 2025-09-06 00:01:42.191756 | orchestrator | 00:01:42.191 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-09-06 00:01:42.336145 | orchestrator | 00:01:42.335 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=8b578725-c44e-4496-bc8f-4b5c8e6bec15] 2025-09-06 00:01:42.349646 | orchestrator | 00:01:42.349 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-09-06 00:01:42.543475 | orchestrator | 00:01:42.543 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=8830ad30-0fd2-47f8-9252-2342cd79cc50] 2025-09-06 00:01:42.557339 | orchestrator | 00:01:42.557 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 
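
The security group rules created in this stretch follow the plan entries above: SSH (tcp/22) and WireGuard (udp/51820) from 0.0.0.0/0, tcp and udp restricted to 192.168.16.0/20, ICMP, node-group rules open to 0.0.0.0/0, and a VRRP rule using IP protocol 112. Two of them are sketched below; the security_group_id references are assumptions based on the rule names.

    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      description       = "wireguard"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "udp"
      port_range_min    = 51820
      port_range_max    = 51820
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id   # assumed attachment
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      description       = "vrrp"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "112"                                                           # IP protocol number for VRRP, as shown in the plan
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_node.id         # assumed attachment
    }
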
2025-09-06 00:01:42.694635 | orchestrator | 00:01:42.694 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=fbf932c3-eac6-40d6-a79d-5d3825925213] 2025-09-06 00:01:42.701782 | orchestrator | 00:01:42.701 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-09-06 00:01:42.714372 | orchestrator | 00:01:42.714 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=d65be45a-616d-41a9-9283-76e03366a604] 2025-09-06 00:01:43.058600 | orchestrator | 00:01:43.058 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=2003370f-a38c-4aac-9944-25b79e331e6b] 2025-09-06 00:01:43.342937 | orchestrator | 00:01:43.342 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=bb94ced5-6333-4607-a1da-7015ca5a2f2e] 2025-09-06 00:01:43.376364 | orchestrator | 00:01:43.376 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=6b371595-6d8a-4022-84d7-4e6711e08431] 2025-09-06 00:01:43.552757 | orchestrator | 00:01:43.552 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=33a5d9a6-879d-45ce-8028-99b05c8c725d] 2025-09-06 00:01:43.678362 | orchestrator | 00:01:43.677 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=aa4f08fa-a974-42ec-8d17-4054f8c5e250] 2025-09-06 00:01:43.840155 | orchestrator | 00:01:43.839 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=f45b4b2d-b575-4e13-9a69-ea2c0ab93760] 2025-09-06 00:01:43.894123 | orchestrator | 00:01:43.893 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=ba6c6c74-0301-4ad4-a832-184e93b57e9d] 2025-09-06 00:01:44.167029 | orchestrator | 00:01:44.166 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=d0a36037-87ee-48ea-b088-c386494650a9] 2025-09-06 00:01:45.458656 | orchestrator | 00:01:45.458 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=52e9a37f-7ae5-48ad-afc4-76583bdaadb4] 2025-09-06 00:01:45.477396 | orchestrator | 00:01:45.477 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-09-06 00:01:45.490558 | orchestrator | 00:01:45.490 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-09-06 00:01:45.492846 | orchestrator | 00:01:45.492 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-09-06 00:01:45.498869 | orchestrator | 00:01:45.498 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-09-06 00:01:45.503896 | orchestrator | 00:01:45.503 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-09-06 00:01:45.508638 | orchestrator | 00:01:45.508 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-09-06 00:01:45.511712 | orchestrator | 00:01:45.511 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 
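
The router, its interface to the management subnet, and the manager's floating IP come together in this part of the apply. The router itself was fully described in the plan above (name "testbed", external network e6be7364-bfd8-4de7-8120-8f41c69a139a, availability zone hint "nova"); a sketch of it and of the interface resource follows, with the subnet reference as the only assumption.

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id   # assumed reference
    }
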
2025-09-06 00:01:47.425899 | orchestrator | 00:01:47.424 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=b11301bd-0f18-47fc-848d-a80de84a35c3] 2025-09-06 00:01:47.436006 | orchestrator | 00:01:47.435 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-09-06 00:01:47.439506 | orchestrator | 00:01:47.439 STDOUT terraform: local_file.inventory: Creating... 2025-09-06 00:01:47.440228 | orchestrator | 00:01:47.440 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-09-06 00:01:47.444625 | orchestrator | 00:01:47.444 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=89b83bf55605f9b46278548546c1419d015dcc9b] 2025-09-06 00:01:47.447268 | orchestrator | 00:01:47.447 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=f1775328e1ef4f9e35e8b8cc7608cb00eb6b8c68] 2025-09-06 00:01:48.330163 | orchestrator | 00:01:48.329 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=b11301bd-0f18-47fc-848d-a80de84a35c3] 2025-09-06 00:01:55.494710 | orchestrator | 00:01:55.494 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-09-06 00:01:55.494847 | orchestrator | 00:01:55.494 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-09-06 00:01:55.502776 | orchestrator | 00:01:55.502 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [11s elapsed] 2025-09-06 00:01:55.507916 | orchestrator | 00:01:55.507 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-09-06 00:01:55.513172 | orchestrator | 00:01:55.512 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-09-06 00:01:55.513292 | orchestrator | 00:01:55.513 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-09-06 00:02:05.495076 | orchestrator | 00:02:05.494 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-09-06 00:02:05.495225 | orchestrator | 00:02:05.494 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-09-06 00:02:05.503203 | orchestrator | 00:02:05.502 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [21s elapsed] 2025-09-06 00:02:05.508465 | orchestrator | 00:02:05.508 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-09-06 00:02:05.513723 | orchestrator | 00:02:05.513 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-09-06 00:02:05.513876 | orchestrator | 00:02:05.513 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-09-06 00:02:15.499051 | orchestrator | 00:02:15.498 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-09-06 00:02:15.499945 | orchestrator | 00:02:15.498 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-09-06 00:02:15.504134 | orchestrator | 00:02:15.503 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
[31s elapsed] 2025-09-06 00:02:15.509371 | orchestrator | 00:02:15.509 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-09-06 00:02:15.514859 | orchestrator | 00:02:15.514 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-09-06 00:02:15.515018 | orchestrator | 00:02:15.514 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-09-06 00:02:16.321436 | orchestrator | 00:02:16.321 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=cab0b8cf-1a97-48ad-b30d-4cb41f648905] 2025-09-06 00:02:16.523472 | orchestrator | 00:02:16.523 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 32s [id=b3897c40-59d8-424c-8e5f-802dde31f671] 2025-09-06 00:02:16.532309 | orchestrator | 00:02:16.532 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 32s [id=68d9bca1-a6ba-43fd-9ad2-ffd55409eff8] 2025-09-06 00:02:25.503498 | orchestrator | 00:02:25.503 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [41s elapsed] 2025-09-06 00:02:25.509540 | orchestrator | 00:02:25.509 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2025-09-06 00:02:25.515739 | orchestrator | 00:02:25.515 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2025-09-06 00:02:26.739322 | orchestrator | 00:02:26.738 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=21fba27d-b0af-4319-97e2-afe8f72e5324] 2025-09-06 00:02:26.789326 | orchestrator | 00:02:26.788 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=68576aeb-f076-4b92-97d4-d2b2381e8c90] 2025-09-06 00:02:27.042868 | orchestrator | 00:02:27.042 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 42s [id=80ed06f6-d004-44e4-8367-a4c9cdaf7d82] 2025-09-06 00:02:27.063571 | orchestrator | 00:02:27.063 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-09-06 00:02:27.065471 | orchestrator | 00:02:27.065 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5315975982391969338] 2025-09-06 00:02:27.065547 | orchestrator | 00:02:27.065 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-09-06 00:02:27.068864 | orchestrator | 00:02:27.068 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-09-06 00:02:27.076409 | orchestrator | 00:02:27.076 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-09-06 00:02:27.084230 | orchestrator | 00:02:27.084 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-09-06 00:02:27.087591 | orchestrator | 00:02:27.087 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-09-06 00:02:27.088154 | orchestrator | 00:02:27.088 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-09-06 00:02:27.088605 | orchestrator | 00:02:27.088 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-09-06 00:02:27.091714 | orchestrator | 00:02:27.091 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 
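
Nine extra volumes are attached here, and the instance/volume IDs embedded in the attachment ids pair node_volume[i] with node_server[3 + i % 3]; only the last three node servers receive additional volumes in this run. A sketch of an attachment resource that would produce exactly that pairing is shown below; the index arithmetic is inferred from the IDs in this log and may not reflect how the real configuration expresses it.

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9                                                                   # indices [0]..[8] in this apply
      instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id   # inferred mapping: volumes land on node_server[3..5]
      volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
    }
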
2025-09-06 00:02:27.097930 | orchestrator | 00:02:27.097 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-09-06 00:02:27.116651 | orchestrator | 00:02:27.116 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-09-06 00:02:30.457850 | orchestrator | 00:02:30.457 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=68d9bca1-a6ba-43fd-9ad2-ffd55409eff8/59e1d33e-4f47-4176-9d4f-6bd749639634] 2025-09-06 00:02:30.480228 | orchestrator | 00:02:30.479 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=68576aeb-f076-4b92-97d4-d2b2381e8c90/8526d803-93b6-4435-afbc-8fa992e96ed7] 2025-09-06 00:02:30.492181 | orchestrator | 00:02:30.491 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=21fba27d-b0af-4319-97e2-afe8f72e5324/4b95c2e9-50f3-4582-afe8-fe749e38f7c5] 2025-09-06 00:02:30.506676 | orchestrator | 00:02:30.506 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=68d9bca1-a6ba-43fd-9ad2-ffd55409eff8/a6f67441-1efd-42d1-ae3b-c249d4af45c4] 2025-09-06 00:02:30.525666 | orchestrator | 00:02:30.525 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=68576aeb-f076-4b92-97d4-d2b2381e8c90/60cce0b1-ac13-42c3-8474-28bd0504aaba] 2025-09-06 00:02:30.533919 | orchestrator | 00:02:30.533 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=21fba27d-b0af-4319-97e2-afe8f72e5324/ff2df27d-11ce-481a-9d5b-51960fd8aeff] 2025-09-06 00:02:33.289810 | orchestrator | 00:02:33.289 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=21fba27d-b0af-4319-97e2-afe8f72e5324/25619c3a-8da8-43cb-a754-e63f9339b6a8] 2025-09-06 00:02:36.604122 | orchestrator | 00:02:36.603 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=68d9bca1-a6ba-43fd-9ad2-ffd55409eff8/8fcef200-ddbb-407c-9fba-bf8a684fde8b] 2025-09-06 00:02:36.622148 | orchestrator | 00:02:36.621 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=68576aeb-f076-4b92-97d4-d2b2381e8c90/ff2245c5-2416-47aa-a035-68e781151c74] 2025-09-06 00:02:37.101952 | orchestrator | 00:02:37.101 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-06 00:02:47.102912 | orchestrator | 00:02:47.102 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-06 00:02:47.496855 | orchestrator | 00:02:47.494 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=95e1b466-53a4-4944-9c59-8323138c1754] 2025-09-06 00:02:47.526870 | orchestrator | 00:02:47.526 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
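
The apply completes with all 64 planned resources added. The two outputs listed in the plan (manager_address and private_key) are marked sensitive, which is why they are printed blank in the lines that follow. A sketch of how such outputs could be declared is given below; the referenced attributes are assumptions, chosen from resources that do appear in this log.

    output "manager_address" {
      value     = openstack_networking_floatingip_v2.manager_floating_ip.address   # assumed source of the address
      sensitive = true
    }

    output "private_key" {
      value     = openstack_compute_keypair_v2.key.private_key
      sensitive = true
    }
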
2025-09-06 00:02:47.526996 | orchestrator | 00:02:47.526 STDOUT terraform: Outputs: 2025-09-06 00:02:47.527015 | orchestrator | 00:02:47.526 STDOUT terraform: manager_address = 2025-09-06 00:02:47.527029 | orchestrator | 00:02:47.526 STDOUT terraform: private_key = 2025-09-06 00:02:47.648077 | orchestrator | ok: Runtime: 0:01:20.137508 2025-09-06 00:02:47.689856 | 2025-09-06 00:02:47.689990 | TASK [Create infrastructure (stable)] 2025-09-06 00:02:48.241640 | orchestrator | skipping: Conditional result was False 2025-09-06 00:02:48.259432 | 2025-09-06 00:02:48.259583 | TASK [Fetch manager address] 2025-09-06 00:02:48.695053 | orchestrator | ok 2025-09-06 00:02:48.705360 | 2025-09-06 00:02:48.705486 | TASK [Set manager_host address] 2025-09-06 00:02:48.785739 | orchestrator | ok 2025-09-06 00:02:48.796001 | 2025-09-06 00:02:48.796125 | LOOP [Update ansible collections] 2025-09-06 00:02:49.597466 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-06 00:02:49.597883 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-06 00:02:49.597960 | orchestrator | Starting galaxy collection install process 2025-09-06 00:02:49.598002 | orchestrator | Process install dependency map 2025-09-06 00:02:49.598037 | orchestrator | Starting collection install process 2025-09-06 00:02:49.598069 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2025-09-06 00:02:49.598109 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2025-09-06 00:02:49.598148 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-06 00:02:49.598222 | orchestrator | ok: Item: commons Runtime: 0:00:00.486640 2025-09-06 00:02:50.422561 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-06 00:02:50.422931 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-06 00:02:50.423004 | orchestrator | Starting galaxy collection install process 2025-09-06 00:02:50.423041 | orchestrator | Process install dependency map 2025-09-06 00:02:50.423072 | orchestrator | Starting collection install process 2025-09-06 00:02:50.423101 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-09-06 00:02:50.423130 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-09-06 00:02:50.423157 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-06 00:02:50.423203 | orchestrator | ok: Item: services Runtime: 0:00:00.576283 2025-09-06 00:02:50.438967 | 2025-09-06 00:02:50.439087 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-06 00:03:00.977524 | orchestrator | ok 2025-09-06 00:03:00.987544 | 2025-09-06 00:03:00.987657 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-06 00:04:01.030217 | orchestrator | ok 2025-09-06 00:04:01.040886 | 2025-09-06 00:04:01.040997 | TASK [Fetch manager ssh hostkey] 2025-09-06 00:04:02.626933 | orchestrator | Output suppressed because no_log was given 2025-09-06 00:04:02.642624 | 2025-09-06 00:04:02.642805 | TASK [Get ssh keypair from terraform environment] 2025-09-06 00:04:03.178993 | orchestrator 
| ok: Runtime: 0:00:00.007837 2025-09-06 00:04:03.199496 | 2025-09-06 00:04:03.199681 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-06 00:04:03.253302 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-06 00:04:03.266160 | 2025-09-06 00:04:03.266300 | TASK [Run manager part 0] 2025-09-06 00:04:04.063672 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-06 00:04:04.106966 | orchestrator | 2025-09-06 00:04:04.107011 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-06 00:04:04.107019 | orchestrator | 2025-09-06 00:04:04.107031 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-06 00:04:05.779495 | orchestrator | ok: [testbed-manager] 2025-09-06 00:04:05.779560 | orchestrator | 2025-09-06 00:04:05.779585 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-06 00:04:05.779595 | orchestrator | 2025-09-06 00:04:05.779604 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-06 00:04:07.619849 | orchestrator | ok: [testbed-manager] 2025-09-06 00:04:07.619912 | orchestrator | 2025-09-06 00:04:07.619921 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-06 00:04:08.256546 | orchestrator | ok: [testbed-manager] 2025-09-06 00:04:08.256590 | orchestrator | 2025-09-06 00:04:08.256598 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-06 00:04:08.309705 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:04:08.309795 | orchestrator | 2025-09-06 00:04:08.309807 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-06 00:04:08.340622 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:04:08.340669 | orchestrator | 2025-09-06 00:04:08.340679 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-06 00:04:08.381020 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:04:08.381072 | orchestrator | 2025-09-06 00:04:08.381082 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-06 00:04:08.413732 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:04:08.413799 | orchestrator | 2025-09-06 00:04:08.413808 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-06 00:04:08.442079 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:04:08.442117 | orchestrator | 2025-09-06 00:04:08.442125 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-06 00:04:08.468399 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:04:08.468438 | orchestrator | 2025-09-06 00:04:08.468450 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-06 00:04:08.496346 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:04:08.496384 | orchestrator | 2025-09-06 00:04:08.496392 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-06 00:04:09.253048 | orchestrator | changed: 
[testbed-manager] 2025-09-06 00:04:09.254470 | orchestrator | 2025-09-06 00:04:09.254500 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-06 00:06:37.857680 | orchestrator | changed: [testbed-manager] 2025-09-06 00:06:37.857756 | orchestrator | 2025-09-06 00:06:37.857768 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-06 00:07:55.454343 | orchestrator | changed: [testbed-manager] 2025-09-06 00:07:55.454523 | orchestrator | 2025-09-06 00:07:55.454544 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-06 00:08:20.290855 | orchestrator | changed: [testbed-manager] 2025-09-06 00:08:20.290903 | orchestrator | 2025-09-06 00:08:20.290914 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-06 00:08:29.270232 | orchestrator | changed: [testbed-manager] 2025-09-06 00:08:29.270328 | orchestrator | 2025-09-06 00:08:29.270345 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-06 00:08:29.317169 | orchestrator | ok: [testbed-manager] 2025-09-06 00:08:29.317210 | orchestrator | 2025-09-06 00:08:29.317219 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-06 00:08:30.059972 | orchestrator | ok: [testbed-manager] 2025-09-06 00:08:30.060059 | orchestrator | 2025-09-06 00:08:30.060079 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-06 00:08:30.749281 | orchestrator | changed: [testbed-manager] 2025-09-06 00:08:30.749368 | orchestrator | 2025-09-06 00:08:30.749385 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-06 00:08:38.414199 | orchestrator | changed: [testbed-manager] 2025-09-06 00:08:38.414286 | orchestrator | 2025-09-06 00:08:38.414331 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-06 00:08:44.150667 | orchestrator | changed: [testbed-manager] 2025-09-06 00:08:44.150777 | orchestrator | 2025-09-06 00:08:44.150788 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-06 00:08:46.637534 | orchestrator | changed: [testbed-manager] 2025-09-06 00:08:46.637579 | orchestrator | 2025-09-06 00:08:46.637587 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-06 00:08:48.359332 | orchestrator | changed: [testbed-manager] 2025-09-06 00:08:48.359409 | orchestrator | 2025-09-06 00:08:48.359424 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-06 00:08:49.401794 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-06 00:08:49.401877 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-06 00:08:49.401891 | orchestrator | 2025-09-06 00:08:49.401905 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-06 00:08:49.441994 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-06 00:08:49.442060 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-06 00:08:49.442067 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-09-06 00:08:49.442073 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-06 00:08:52.451954 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-06 00:08:52.452175 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-06 00:08:52.452190 | orchestrator | 2025-09-06 00:08:52.452200 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-06 00:08:53.008869 | orchestrator | changed: [testbed-manager] 2025-09-06 00:08:53.008947 | orchestrator | 2025-09-06 00:08:53.008961 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-06 00:10:13.621464 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-06 00:10:13.621565 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-06 00:10:13.621585 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-06 00:10:13.621598 | orchestrator | 2025-09-06 00:10:13.621737 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-06 00:10:15.854922 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-06 00:10:15.855019 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-06 00:10:15.855035 | orchestrator | 2025-09-06 00:10:15.855047 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-06 00:10:15.855060 | orchestrator | 2025-09-06 00:10:15.855071 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-06 00:10:17.279745 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:17.279833 | orchestrator | 2025-09-06 00:10:17.279852 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-06 00:10:17.320495 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:17.320552 | orchestrator | 2025-09-06 00:10:17.320565 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-06 00:10:17.380681 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:17.380728 | orchestrator | 2025-09-06 00:10:17.380740 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-06 00:10:18.088004 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:18.088082 | orchestrator | 2025-09-06 00:10:18.088098 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-06 00:10:18.781769 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:18.781854 | orchestrator | 2025-09-06 00:10:18.781870 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-06 00:10:20.096722 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-06 00:10:20.096807 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-06 00:10:20.096824 | orchestrator | 2025-09-06 00:10:20.096852 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-06 00:10:21.377467 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:21.377520 | orchestrator | 2025-09-06 00:10:21.377527 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-09-06 00:10:23.023152 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-06 00:10:23.023194 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-06 00:10:23.023202 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-06 00:10:23.023209 | orchestrator | 2025-09-06 00:10:23.023217 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-06 00:10:23.077038 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:10:23.077080 | orchestrator | 2025-09-06 00:10:23.077089 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-06 00:10:23.601999 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:23.602106 | orchestrator | 2025-09-06 00:10:23.602124 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-06 00:10:23.672205 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:10:23.672292 | orchestrator | 2025-09-06 00:10:23.672308 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-06 00:10:24.527291 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-06 00:10:24.527336 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:24.527346 | orchestrator | 2025-09-06 00:10:24.527354 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-06 00:10:24.564488 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:10:24.564531 | orchestrator | 2025-09-06 00:10:24.564540 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-06 00:10:24.601357 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:10:24.601399 | orchestrator | 2025-09-06 00:10:24.601409 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-06 00:10:24.636764 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:10:24.636805 | orchestrator | 2025-09-06 00:10:24.636814 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-06 00:10:24.682668 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:10:24.682710 | orchestrator | 2025-09-06 00:10:24.682720 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-06 00:10:25.370097 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:25.370185 | orchestrator | 2025-09-06 00:10:25.370201 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-06 00:10:25.370214 | orchestrator | 2025-09-06 00:10:25.370226 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-06 00:10:26.742794 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:26.742901 | orchestrator | 2025-09-06 00:10:26.742928 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-06 00:10:27.679530 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:27.679641 | orchestrator | 2025-09-06 00:10:27.679658 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:10:27.679672 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-06 
00:10:27.679684 | orchestrator | 2025-09-06 00:10:28.044266 | orchestrator | ok: Runtime: 0:06:24.233657 2025-09-06 00:10:28.061994 | 2025-09-06 00:10:28.062119 | TASK [Point out that the log in on the manager is now possible] 2025-09-06 00:10:28.112503 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-06 00:10:28.124629 | 2025-09-06 00:10:28.124809 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-06 00:10:28.162817 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-06 00:10:28.173477 | 2025-09-06 00:10:28.173606 | TASK [Run manager part 1 + 2] 2025-09-06 00:10:29.007715 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-06 00:10:29.061675 | orchestrator | 2025-09-06 00:10:29.061751 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-06 00:10:29.061769 | orchestrator | 2025-09-06 00:10:29.061797 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-06 00:10:31.907030 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:31.907159 | orchestrator | 2025-09-06 00:10:31.907214 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-06 00:10:31.938631 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:10:31.938687 | orchestrator | 2025-09-06 00:10:31.938701 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-06 00:10:31.979252 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:31.979293 | orchestrator | 2025-09-06 00:10:31.979308 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-06 00:10:32.018119 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:32.018150 | orchestrator | 2025-09-06 00:10:32.018158 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-06 00:10:32.076222 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:32.076255 | orchestrator | 2025-09-06 00:10:32.076262 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-06 00:10:32.129177 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:32.129208 | orchestrator | 2025-09-06 00:10:32.129214 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-06 00:10:32.166526 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-06 00:10:32.166554 | orchestrator | 2025-09-06 00:10:32.166558 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-06 00:10:32.866945 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:32.867015 | orchestrator | 2025-09-06 00:10:32.867031 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-06 00:10:32.915101 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:10:32.915182 | orchestrator | 2025-09-06 00:10:32.915198 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-06 00:10:34.140363 | orchestrator | changed: 
[testbed-manager] 2025-09-06 00:10:34.140446 | orchestrator | 2025-09-06 00:10:34.140465 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-06 00:10:34.662680 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:34.662754 | orchestrator | 2025-09-06 00:10:34.662772 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-06 00:10:35.726802 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:35.726867 | orchestrator | 2025-09-06 00:10:35.726882 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-06 00:10:50.795038 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:50.795248 | orchestrator | 2025-09-06 00:10:50.795265 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-06 00:10:51.420090 | orchestrator | ok: [testbed-manager] 2025-09-06 00:10:51.420142 | orchestrator | 2025-09-06 00:10:51.420149 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-06 00:10:51.462529 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:10:51.462590 | orchestrator | 2025-09-06 00:10:51.462597 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-06 00:10:52.317764 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:52.317801 | orchestrator | 2025-09-06 00:10:52.317807 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-06 00:10:53.189140 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:53.189208 | orchestrator | 2025-09-06 00:10:53.189220 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-06 00:10:53.729204 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:53.729270 | orchestrator | 2025-09-06 00:10:53.729283 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-06 00:10:53.764836 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-06 00:10:53.764885 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-06 00:10:53.764891 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-06 00:10:53.764895 | orchestrator | deprecation_warnings=False in ansible.cfg. 
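
The deprecation warning above also names the remedy: setting deprecation_warnings=False in ansible.cfg. As a small illustration, and assuming one wants to silence such warnings for this kind of run at all, the option can be set either in the configuration file or via the matching environment variable; the exact ansible.cfg to edit depends on which one the run emitting the warning actually loads.

    # Illustration: silence Ansible deprecation warnings like the one above, either
    # in the [defaults] section of the ansible.cfg used by the run:
    #
    #   [defaults]
    #   deprecation_warnings = False
    #
    # or, for a single invocation, via the corresponding environment variable:
    export ANSIBLE_DEPRECATION_WARNINGS=False
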
2025-09-06 00:10:55.804647 | orchestrator | changed: [testbed-manager] 2025-09-06 00:10:55.804713 | orchestrator | 2025-09-06 00:10:55.804721 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-06 00:11:04.596147 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-06 00:11:04.596196 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-06 00:11:04.596205 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-06 00:11:04.596212 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-06 00:11:04.596222 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-06 00:11:04.596228 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-06 00:11:04.596234 | orchestrator | 2025-09-06 00:11:04.596241 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-06 00:11:05.606421 | orchestrator | changed: [testbed-manager] 2025-09-06 00:11:05.606504 | orchestrator | 2025-09-06 00:11:05.606540 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-06 00:11:05.646616 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:11:05.646691 | orchestrator | 2025-09-06 00:11:05.646711 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-06 00:11:08.701250 | orchestrator | changed: [testbed-manager] 2025-09-06 00:11:08.701342 | orchestrator | 2025-09-06 00:11:08.701358 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-06 00:11:08.739861 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:11:08.739946 | orchestrator | 2025-09-06 00:11:08.739964 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-06 00:12:43.779239 | orchestrator | changed: [testbed-manager] 2025-09-06 00:12:43.779333 | orchestrator | 2025-09-06 00:12:43.779351 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-06 00:12:44.852426 | orchestrator | ok: [testbed-manager] 2025-09-06 00:12:44.852483 | orchestrator | 2025-09-06 00:12:44.852490 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:12:44.852498 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-06 00:12:44.852504 | orchestrator | 2025-09-06 00:12:45.288929 | orchestrator | ok: Runtime: 0:02:16.465159 2025-09-06 00:12:45.305971 | 2025-09-06 00:12:45.306122 | TASK [Reboot manager] 2025-09-06 00:12:46.842171 | orchestrator | ok: Runtime: 0:00:00.913541 2025-09-06 00:12:46.860784 | 2025-09-06 00:12:46.860929 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-06 00:13:00.313493 | orchestrator | ok 2025-09-06 00:13:00.321502 | 2025-09-06 00:13:00.321656 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-06 00:14:00.372909 | orchestrator | ok 2025-09-06 00:14:00.383064 | 2025-09-06 00:14:00.383190 | TASK [Deploy manager + bootstrap nodes] 2025-09-06 00:14:02.699373 | orchestrator | 2025-09-06 00:14:02.700489 | orchestrator | # DEPLOY MANAGER 2025-09-06 00:14:02.700532 | orchestrator | 2025-09-06 00:14:02.700548 | orchestrator | + set -e 2025-09-06 00:14:02.700562 | orchestrator | + echo 2025-09-06 00:14:02.700576 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-09-06 00:14:02.700595 | orchestrator | + echo 2025-09-06 00:14:02.700649 | orchestrator | + cat /opt/manager-vars.sh 2025-09-06 00:14:02.702783 | orchestrator | export NUMBER_OF_NODES=6 2025-09-06 00:14:02.702811 | orchestrator | 2025-09-06 00:14:02.702895 | orchestrator | export CEPH_VERSION=reef 2025-09-06 00:14:02.702911 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-06 00:14:02.702924 | orchestrator | export MANAGER_VERSION=latest 2025-09-06 00:14:02.702947 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-06 00:14:02.702958 | orchestrator | 2025-09-06 00:14:02.702977 | orchestrator | export ARA=false 2025-09-06 00:14:02.702989 | orchestrator | export DEPLOY_MODE=manager 2025-09-06 00:14:02.703006 | orchestrator | export TEMPEST=true 2025-09-06 00:14:02.703018 | orchestrator | export IS_ZUUL=true 2025-09-06 00:14:02.703029 | orchestrator | 2025-09-06 00:14:02.703048 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.59 2025-09-06 00:14:02.703060 | orchestrator | export EXTERNAL_API=false 2025-09-06 00:14:02.703070 | orchestrator | 2025-09-06 00:14:02.703081 | orchestrator | export IMAGE_USER=ubuntu 2025-09-06 00:14:02.703095 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-06 00:14:02.703106 | orchestrator | 2025-09-06 00:14:02.703117 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-06 00:14:02.703134 | orchestrator | 2025-09-06 00:14:02.703146 | orchestrator | + echo 2025-09-06 00:14:02.703158 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-06 00:14:02.703820 | orchestrator | ++ export INTERACTIVE=false 2025-09-06 00:14:02.703844 | orchestrator | ++ INTERACTIVE=false 2025-09-06 00:14:02.703903 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-06 00:14:02.703921 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-06 00:14:02.704045 | orchestrator | + source /opt/manager-vars.sh 2025-09-06 00:14:02.704060 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-06 00:14:02.704106 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-06 00:14:02.704120 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-06 00:14:02.704139 | orchestrator | ++ CEPH_VERSION=reef 2025-09-06 00:14:02.704150 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-06 00:14:02.704162 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-06 00:14:02.704173 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-06 00:14:02.704184 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-06 00:14:02.704233 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-06 00:14:02.704256 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-06 00:14:02.704267 | orchestrator | ++ export ARA=false 2025-09-06 00:14:02.704279 | orchestrator | ++ ARA=false 2025-09-06 00:14:02.704319 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-06 00:14:02.704338 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-06 00:14:02.704349 | orchestrator | ++ export TEMPEST=true 2025-09-06 00:14:02.704360 | orchestrator | ++ TEMPEST=true 2025-09-06 00:14:02.704371 | orchestrator | ++ export IS_ZUUL=true 2025-09-06 00:14:02.704382 | orchestrator | ++ IS_ZUUL=true 2025-09-06 00:14:02.704432 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.59 2025-09-06 00:14:02.704445 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.59 2025-09-06 00:14:02.704460 | orchestrator | ++ export EXTERNAL_API=false 2025-09-06 00:14:02.704472 | orchestrator | ++ EXTERNAL_API=false 2025-09-06 00:14:02.704483 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-06 
00:14:02.704493 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-06 00:14:02.704504 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-06 00:14:02.704515 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-06 00:14:02.704526 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-06 00:14:02.704537 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-06 00:14:02.704549 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-06 00:14:02.759430 | orchestrator | + docker version 2025-09-06 00:14:03.028321 | orchestrator | Client: Docker Engine - Community 2025-09-06 00:14:03.028476 | orchestrator | Version: 27.5.1 2025-09-06 00:14:03.028495 | orchestrator | API version: 1.47 2025-09-06 00:14:03.028510 | orchestrator | Go version: go1.22.11 2025-09-06 00:14:03.028521 | orchestrator | Git commit: 9f9e405 2025-09-06 00:14:03.028532 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-06 00:14:03.028544 | orchestrator | OS/Arch: linux/amd64 2025-09-06 00:14:03.028555 | orchestrator | Context: default 2025-09-06 00:14:03.028566 | orchestrator | 2025-09-06 00:14:03.028578 | orchestrator | Server: Docker Engine - Community 2025-09-06 00:14:03.028589 | orchestrator | Engine: 2025-09-06 00:14:03.028762 | orchestrator | Version: 27.5.1 2025-09-06 00:14:03.028802 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-06 00:14:03.028847 | orchestrator | Go version: go1.22.11 2025-09-06 00:14:03.028859 | orchestrator | Git commit: 4c9b3b0 2025-09-06 00:14:03.028870 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-06 00:14:03.028881 | orchestrator | OS/Arch: linux/amd64 2025-09-06 00:14:03.028892 | orchestrator | Experimental: false 2025-09-06 00:14:03.028911 | orchestrator | containerd: 2025-09-06 00:14:03.029052 | orchestrator | Version: 1.7.27 2025-09-06 00:14:03.029070 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-06 00:14:03.029082 | orchestrator | runc: 2025-09-06 00:14:03.029230 | orchestrator | Version: 1.2.5 2025-09-06 00:14:03.029247 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-06 00:14:03.029258 | orchestrator | docker-init: 2025-09-06 00:14:03.029430 | orchestrator | Version: 0.19.0 2025-09-06 00:14:03.029449 | orchestrator | GitCommit: de40ad0 2025-09-06 00:14:03.032836 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-06 00:14:03.042365 | orchestrator | + set -e 2025-09-06 00:14:03.042390 | orchestrator | + source /opt/manager-vars.sh 2025-09-06 00:14:03.042429 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-06 00:14:03.042442 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-06 00:14:03.042454 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-06 00:14:03.042466 | orchestrator | ++ CEPH_VERSION=reef 2025-09-06 00:14:03.042603 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-06 00:14:03.042627 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-06 00:14:03.042639 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-06 00:14:03.042650 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-06 00:14:03.042661 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-06 00:14:03.042672 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-06 00:14:03.042683 | orchestrator | ++ export ARA=false 2025-09-06 00:14:03.042694 | orchestrator | ++ ARA=false 2025-09-06 00:14:03.042705 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-06 00:14:03.042717 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-06 00:14:03.042734 | orchestrator | ++ 
export TEMPEST=true 2025-09-06 00:14:03.042745 | orchestrator | ++ TEMPEST=true 2025-09-06 00:14:03.042761 | orchestrator | ++ export IS_ZUUL=true 2025-09-06 00:14:03.042773 | orchestrator | ++ IS_ZUUL=true 2025-09-06 00:14:03.042784 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.59 2025-09-06 00:14:03.042795 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.59 2025-09-06 00:14:03.042806 | orchestrator | ++ export EXTERNAL_API=false 2025-09-06 00:14:03.042817 | orchestrator | ++ EXTERNAL_API=false 2025-09-06 00:14:03.042966 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-06 00:14:03.042989 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-06 00:14:03.043000 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-06 00:14:03.043011 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-06 00:14:03.043022 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-06 00:14:03.043033 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-06 00:14:03.043044 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-06 00:14:03.043055 | orchestrator | ++ export INTERACTIVE=false 2025-09-06 00:14:03.043066 | orchestrator | ++ INTERACTIVE=false 2025-09-06 00:14:03.043080 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-06 00:14:03.043098 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-06 00:14:03.043267 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-06 00:14:03.043287 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-06 00:14:03.043298 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-06 00:14:03.050802 | orchestrator | + set -e 2025-09-06 00:14:03.050841 | orchestrator | + VERSION=reef 2025-09-06 00:14:03.052104 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-06 00:14:03.057773 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-06 00:14:03.057796 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-06 00:14:03.063276 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-06 00:14:03.070209 | orchestrator | + set -e 2025-09-06 00:14:03.070230 | orchestrator | + VERSION=2024.2 2025-09-06 00:14:03.070735 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-06 00:14:03.074534 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-06 00:14:03.074554 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-06 00:14:03.079951 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-06 00:14:03.080916 | orchestrator | ++ semver latest 7.0.0 2025-09-06 00:14:03.144872 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-06 00:14:03.144946 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-06 00:14:03.144959 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-06 00:14:03.144972 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-06 00:14:03.230892 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-06 00:14:03.232186 | orchestrator | + source /opt/venv/bin/activate 2025-09-06 00:14:03.233190 | orchestrator | ++ deactivate nondestructive 2025-09-06 00:14:03.233219 | orchestrator | ++ '[' -n '' ']' 2025-09-06 00:14:03.233232 | orchestrator | ++ '[' -n '' ']' 2025-09-06 00:14:03.233245 | orchestrator | ++ hash -r 2025-09-06 00:14:03.233437 | orchestrator | ++ 
'[' -n '' ']' 2025-09-06 00:14:03.233455 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-06 00:14:03.233465 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-06 00:14:03.233481 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-06 00:14:03.233611 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-06 00:14:03.233629 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-06 00:14:03.233640 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-06 00:14:03.233651 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-06 00:14:03.233663 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-06 00:14:03.233685 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-06 00:14:03.233697 | orchestrator | ++ export PATH 2025-09-06 00:14:03.233852 | orchestrator | ++ '[' -n '' ']' 2025-09-06 00:14:03.233868 | orchestrator | ++ '[' -z '' ']' 2025-09-06 00:14:03.233879 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-06 00:14:03.233894 | orchestrator | ++ PS1='(venv) ' 2025-09-06 00:14:03.233905 | orchestrator | ++ export PS1 2025-09-06 00:14:03.233916 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-06 00:14:03.233998 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-06 00:14:03.234012 | orchestrator | ++ hash -r 2025-09-06 00:14:03.234229 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-06 00:14:04.426826 | orchestrator | 2025-09-06 00:14:04.426926 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-09-06 00:14:04.426949 | orchestrator | 2025-09-06 00:14:04.426969 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-06 00:14:04.967550 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:04.967672 | orchestrator | 2025-09-06 00:14:04.967689 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-06 00:14:05.934157 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:05.934266 | orchestrator | 2025-09-06 00:14:05.934283 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-06 00:14:05.934296 | orchestrator | 2025-09-06 00:14:05.934308 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-06 00:14:09.191178 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:09.191294 | orchestrator | 2025-09-06 00:14:09.191313 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-06 00:14:09.246363 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:09.246478 | orchestrator | 2025-09-06 00:14:09.246496 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-06 00:14:09.691366 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:09.691506 | orchestrator | 2025-09-06 00:14:09.691524 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-06 00:14:09.734137 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:14:09.734192 | orchestrator | 2025-09-06 00:14:09.734206 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-09-06 00:14:10.072105 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:10.072219 | orchestrator | 2025-09-06 00:14:10.072237 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-06 00:14:10.124944 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:14:10.125024 | orchestrator | 2025-09-06 00:14:10.125039 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-06 00:14:10.448020 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:10.448121 | orchestrator | 2025-09-06 00:14:10.448137 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-06 00:14:10.559281 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:14:10.559365 | orchestrator | 2025-09-06 00:14:10.559379 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-06 00:14:10.559420 | orchestrator | 2025-09-06 00:14:10.559436 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-06 00:14:13.243258 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:13.243355 | orchestrator | 2025-09-06 00:14:13.243378 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-06 00:14:13.352737 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-06 00:14:13.352820 | orchestrator | 2025-09-06 00:14:13.352842 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-06 00:14:13.416263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-06 00:14:13.416318 | orchestrator | 2025-09-06 00:14:13.416338 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-06 00:14:14.479156 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-06 00:14:14.479255 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-09-06 00:14:14.479270 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-06 00:14:14.479282 | orchestrator | 2025-09-06 00:14:14.479294 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-06 00:14:16.247927 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-06 00:14:16.248029 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-06 00:14:16.248047 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-06 00:14:16.248059 | orchestrator | 2025-09-06 00:14:16.248072 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-06 00:14:16.882261 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-06 00:14:16.882369 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:16.882385 | orchestrator | 2025-09-06 00:14:16.882455 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-06 00:14:17.492446 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-06 00:14:17.492538 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:17.492552 | orchestrator | 2025-09-06 00:14:17.492564 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-09-06 00:14:17.538329 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:14:17.538356 | orchestrator | 2025-09-06 00:14:17.538368 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-06 00:14:17.889480 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:17.889542 | orchestrator | 2025-09-06 00:14:17.889554 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-06 00:14:17.965211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-06 00:14:17.965278 | orchestrator | 2025-09-06 00:14:17.965290 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-06 00:14:18.950729 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:18.950837 | orchestrator | 2025-09-06 00:14:18.950852 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-06 00:14:19.760714 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:19.760811 | orchestrator | 2025-09-06 00:14:19.760827 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-06 00:14:31.475532 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:31.475666 | orchestrator | 2025-09-06 00:14:31.475694 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-06 00:14:31.521154 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:14:31.521230 | orchestrator | 2025-09-06 00:14:31.521238 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-06 00:14:31.521245 | orchestrator | 2025-09-06 00:14:31.521251 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-06 00:14:33.203256 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:33.203363 | orchestrator | 2025-09-06 00:14:33.203456 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-06 00:14:33.309625 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-06 00:14:33.309687 | orchestrator | 2025-09-06 00:14:33.309699 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-06 00:14:33.366137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-06 00:14:33.366164 | orchestrator | 2025-09-06 00:14:33.366175 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-06 00:14:35.793529 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:35.793647 | orchestrator | 2025-09-06 00:14:35.793665 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-06 00:14:35.844298 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:35.844473 | orchestrator | 2025-09-06 00:14:35.844498 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-06 00:14:35.960109 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-06 00:14:35.960188 | orchestrator | 2025-09-06 00:14:35.960201 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-06 00:14:38.733005 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-06 00:14:38.733113 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-06 00:14:38.733128 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-06 00:14:38.733140 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-06 00:14:38.733151 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-06 00:14:38.733162 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-06 00:14:38.733173 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-06 00:14:38.733184 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-06 00:14:38.733195 | orchestrator | 2025-09-06 00:14:38.733207 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-09-06 00:14:39.347226 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:39.347330 | orchestrator | 2025-09-06 00:14:39.347345 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-06 00:14:39.932943 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:39.933040 | orchestrator | 2025-09-06 00:14:39.933055 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-06 00:14:40.002166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-06 00:14:40.002263 | orchestrator | 2025-09-06 00:14:40.002280 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-06 00:14:41.187690 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-06 00:14:41.187791 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-06 00:14:41.187805 | orchestrator | 2025-09-06 00:14:41.187818 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-06 00:14:41.770178 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:41.770286 | orchestrator | 2025-09-06 00:14:41.770301 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-06 00:14:41.817309 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:14:41.817427 | orchestrator | 2025-09-06 00:14:41.817443 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-06 00:14:41.883173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-06 00:14:41.883236 | orchestrator | 2025-09-06 00:14:41.883250 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-09-06 00:14:42.493740 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:42.493836 | orchestrator | 2025-09-06 00:14:42.493851 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-06 00:14:42.558142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-06 00:14:42.558219 | orchestrator | 2025-09-06 00:14:42.558234 | orchestrator | 
TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-06 00:14:43.851835 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-06 00:14:43.851943 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-06 00:14:43.851958 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:43.851972 | orchestrator | 2025-09-06 00:14:43.851984 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-06 00:14:44.470740 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:44.470878 | orchestrator | 2025-09-06 00:14:44.470901 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-06 00:14:44.524652 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:14:44.524741 | orchestrator | 2025-09-06 00:14:44.524756 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-06 00:14:44.614428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-06 00:14:44.614522 | orchestrator | 2025-09-06 00:14:44.614537 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-06 00:14:45.117499 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:45.117601 | orchestrator | 2025-09-06 00:14:45.117616 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-06 00:14:45.515966 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:45.516087 | orchestrator | 2025-09-06 00:14:45.516104 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-06 00:14:46.712541 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-06 00:14:46.712644 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-06 00:14:46.712658 | orchestrator | 2025-09-06 00:14:46.712670 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-06 00:14:47.319168 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:47.319258 | orchestrator | 2025-09-06 00:14:47.319270 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-06 00:14:47.694574 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:47.694673 | orchestrator | 2025-09-06 00:14:47.694688 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-06 00:14:48.025864 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:48.025961 | orchestrator | 2025-09-06 00:14:48.025977 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-06 00:14:48.072209 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:14:48.072274 | orchestrator | 2025-09-06 00:14:48.072290 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-06 00:14:48.132136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-06 00:14:48.132201 | orchestrator | 2025-09-06 00:14:48.132214 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-09-06 00:14:48.170649 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:48.170692 | 
orchestrator | 2025-09-06 00:14:48.170705 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-06 00:14:50.117033 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-06 00:14:50.117143 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-06 00:14:50.117160 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-06 00:14:50.117172 | orchestrator | 2025-09-06 00:14:50.117185 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-06 00:14:50.808260 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:50.808430 | orchestrator | 2025-09-06 00:14:50.808469 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-06 00:14:51.509351 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:51.509508 | orchestrator | 2025-09-06 00:14:51.509526 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-06 00:14:52.203688 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:52.203800 | orchestrator | 2025-09-06 00:14:52.203816 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-06 00:14:52.271429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-06 00:14:52.271510 | orchestrator | 2025-09-06 00:14:52.271523 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-06 00:14:52.323890 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:52.323966 | orchestrator | 2025-09-06 00:14:52.323979 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-09-06 00:14:53.006723 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-06 00:14:53.006836 | orchestrator | 2025-09-06 00:14:53.006850 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-06 00:14:53.082433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-06 00:14:53.082525 | orchestrator | 2025-09-06 00:14:53.082538 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-06 00:14:53.757447 | orchestrator | changed: [testbed-manager] 2025-09-06 00:14:53.757555 | orchestrator | 2025-09-06 00:14:53.757571 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-06 00:14:54.311524 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:54.311624 | orchestrator | 2025-09-06 00:14:54.311639 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-06 00:14:54.364918 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:14:54.364991 | orchestrator | 2025-09-06 00:14:54.365004 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-06 00:14:54.415106 | orchestrator | ok: [testbed-manager] 2025-09-06 00:14:54.415145 | orchestrator | 2025-09-06 00:14:54.415156 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-06 00:14:55.218289 | orchestrator | changed: [testbed-manager] 2025-09-06 
00:14:55.218439 | orchestrator | 2025-09-06 00:14:55.218457 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-06 00:16:22.353946 | orchestrator | changed: [testbed-manager] 2025-09-06 00:16:22.354131 | orchestrator | 2025-09-06 00:16:22.354153 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-06 00:16:23.373345 | orchestrator | ok: [testbed-manager] 2025-09-06 00:16:23.373457 | orchestrator | 2025-09-06 00:16:23.373476 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-06 00:16:23.430748 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:16:23.430812 | orchestrator | 2025-09-06 00:16:23.430829 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-06 00:16:25.751099 | orchestrator | changed: [testbed-manager] 2025-09-06 00:16:25.751214 | orchestrator | 2025-09-06 00:16:25.751231 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-06 00:16:25.799159 | orchestrator | ok: [testbed-manager] 2025-09-06 00:16:25.799244 | orchestrator | 2025-09-06 00:16:25.799258 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-06 00:16:25.799270 | orchestrator | 2025-09-06 00:16:25.799281 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-06 00:16:25.844359 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:16:25.844403 | orchestrator | 2025-09-06 00:16:25.844415 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-06 00:17:25.900768 | orchestrator | Pausing for 60 seconds 2025-09-06 00:17:25.900920 | orchestrator | changed: [testbed-manager] 2025-09-06 00:17:25.900950 | orchestrator | 2025-09-06 00:17:25.900971 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-06 00:17:30.040541 | orchestrator | changed: [testbed-manager] 2025-09-06 00:17:30.040648 | orchestrator | 2025-09-06 00:17:30.040665 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-09-06 00:18:11.740741 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-09-06 00:18:11.740868 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
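
The 'Pull container images' task above fetches every image referenced by the manager's docker-compose.yml before the service is started, and the handlers that follow wait until the stack reports healthy. Done by hand this corresponds roughly to the compose commands below; the project directory /opt/manager is the one queried later in this log, and whether the role shells out to compose exactly like this is an assumption.

    # Illustration only: pre-pull the manager stack's images and check service state.
    # /opt/manager is the compose project directory used further down in this log.
    docker compose --project-directory /opt/manager pull
    docker compose --project-directory /opt/manager ps
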
2025-09-06 00:18:11.740885 | orchestrator | changed: [testbed-manager] 2025-09-06 00:18:11.740926 | orchestrator | 2025-09-06 00:18:11.740939 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-06 00:18:20.751991 | orchestrator | changed: [testbed-manager] 2025-09-06 00:18:20.752093 | orchestrator | 2025-09-06 00:18:20.752110 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-06 00:18:20.835299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-06 00:18:20.835361 | orchestrator | 2025-09-06 00:18:20.835374 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-06 00:18:20.835386 | orchestrator | 2025-09-06 00:18:20.835397 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-06 00:18:20.875435 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:18:20.875481 | orchestrator | 2025-09-06 00:18:20.875498 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:18:20.875512 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-09-06 00:18:20.875523 | orchestrator | 2025-09-06 00:18:20.936111 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-06 00:18:20.936161 | orchestrator | + deactivate 2025-09-06 00:18:20.936175 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-06 00:18:20.936187 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-06 00:18:20.936198 | orchestrator | + export PATH 2025-09-06 00:18:20.936209 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-06 00:18:20.936221 | orchestrator | + '[' -n '' ']' 2025-09-06 00:18:20.936232 | orchestrator | + hash -r 2025-09-06 00:18:20.936264 | orchestrator | + '[' -n '' ']' 2025-09-06 00:18:20.936275 | orchestrator | + unset VIRTUAL_ENV 2025-09-06 00:18:20.936287 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-06 00:18:20.936415 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-09-06 00:18:20.936501 | orchestrator | + unset -f deactivate 2025-09-06 00:18:20.936519 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-06 00:18:20.943570 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-06 00:18:20.943632 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-06 00:18:20.943645 | orchestrator | + local max_attempts=60 2025-09-06 00:18:20.943657 | orchestrator | + local name=ceph-ansible 2025-09-06 00:18:20.943669 | orchestrator | + local attempt_num=1 2025-09-06 00:18:20.944070 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:18:20.971076 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:18:20.971101 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-06 00:18:20.971112 | orchestrator | + local max_attempts=60 2025-09-06 00:18:20.971123 | orchestrator | + local name=kolla-ansible 2025-09-06 00:18:20.971134 | orchestrator | + local attempt_num=1 2025-09-06 00:18:20.972691 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-06 00:18:20.997597 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:18:20.997629 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-06 00:18:20.997641 | orchestrator | + local max_attempts=60 2025-09-06 00:18:20.997653 | orchestrator | + local name=osism-ansible 2025-09-06 00:18:20.997665 | orchestrator | + local attempt_num=1 2025-09-06 00:18:20.998106 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-06 00:18:21.022540 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:18:21.022573 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-06 00:18:21.022586 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-06 00:18:21.698253 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-09-06 00:18:21.894622 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-06 00:18:21.894708 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-09-06 00:18:21.894725 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-09-06 00:18:21.894761 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-09-06 00:18:21.894774 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-09-06 00:18:21.894794 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-09-06 00:18:21.894806 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-09-06 00:18:21.894817 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-09-06 00:18:21.894828 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest 
"/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-09-06 00:18:21.894839 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-09-06 00:18:21.894849 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-09-06 00:18:21.894860 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-09-06 00:18:21.894871 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-09-06 00:18:21.894881 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-09-06 00:18:21.894892 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-09-06 00:18:21.894903 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-09-06 00:18:21.899181 | orchestrator | ++ semver latest 7.0.0 2025-09-06 00:18:21.940388 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-06 00:18:21.940414 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-06 00:18:21.940427 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-06 00:18:21.945099 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-06 00:18:33.858187 | orchestrator | 2025-09-06 00:18:33 | INFO  | Task b9466e5b-386b-4ebb-8a3d-4d8507203677 (resolvconf) was prepared for execution. 2025-09-06 00:18:33.858326 | orchestrator | 2025-09-06 00:18:33 | INFO  | It takes a moment until task b9466e5b-386b-4ebb-8a3d-4d8507203677 (resolvconf) has been started and output is visible here. 
2025-09-06 00:18:46.781455 | orchestrator | 2025-09-06 00:18:46.781565 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-06 00:18:46.781581 | orchestrator | 2025-09-06 00:18:46.781593 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-06 00:18:46.781631 | orchestrator | Saturday 06 September 2025 00:18:37 +0000 (0:00:00.140) 0:00:00.140 **** 2025-09-06 00:18:46.781643 | orchestrator | ok: [testbed-manager] 2025-09-06 00:18:46.781656 | orchestrator | 2025-09-06 00:18:46.781667 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-06 00:18:46.781678 | orchestrator | Saturday 06 September 2025 00:18:41 +0000 (0:00:03.424) 0:00:03.565 **** 2025-09-06 00:18:46.781689 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:18:46.781701 | orchestrator | 2025-09-06 00:18:46.781712 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-06 00:18:46.781722 | orchestrator | Saturday 06 September 2025 00:18:41 +0000 (0:00:00.065) 0:00:03.631 **** 2025-09-06 00:18:46.781733 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-06 00:18:46.781746 | orchestrator | 2025-09-06 00:18:46.781757 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-06 00:18:46.781768 | orchestrator | Saturday 06 September 2025 00:18:41 +0000 (0:00:00.087) 0:00:03.719 **** 2025-09-06 00:18:46.781779 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-06 00:18:46.781790 | orchestrator | 2025-09-06 00:18:46.781800 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-06 00:18:46.781811 | orchestrator | Saturday 06 September 2025 00:18:41 +0000 (0:00:00.077) 0:00:03.796 **** 2025-09-06 00:18:46.781821 | orchestrator | ok: [testbed-manager] 2025-09-06 00:18:46.781832 | orchestrator | 2025-09-06 00:18:46.781843 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-06 00:18:46.781853 | orchestrator | Saturday 06 September 2025 00:18:42 +0000 (0:00:00.995) 0:00:04.791 **** 2025-09-06 00:18:46.781864 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:18:46.781875 | orchestrator | 2025-09-06 00:18:46.781885 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-06 00:18:46.781896 | orchestrator | Saturday 06 September 2025 00:18:42 +0000 (0:00:00.063) 0:00:04.855 **** 2025-09-06 00:18:46.781907 | orchestrator | ok: [testbed-manager] 2025-09-06 00:18:46.781917 | orchestrator | 2025-09-06 00:18:46.781928 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-06 00:18:46.781938 | orchestrator | Saturday 06 September 2025 00:18:42 +0000 (0:00:00.458) 0:00:05.313 **** 2025-09-06 00:18:46.781949 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:18:46.781960 | orchestrator | 2025-09-06 00:18:46.781971 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-06 00:18:46.781982 | orchestrator | Saturday 06 September 2025 00:18:42 +0000 (0:00:00.078) 
0:00:05.392 **** 2025-09-06 00:18:46.781993 | orchestrator | changed: [testbed-manager] 2025-09-06 00:18:46.782003 | orchestrator | 2025-09-06 00:18:46.782050 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-06 00:18:46.782064 | orchestrator | Saturday 06 September 2025 00:18:43 +0000 (0:00:00.510) 0:00:05.903 **** 2025-09-06 00:18:46.782075 | orchestrator | changed: [testbed-manager] 2025-09-06 00:18:46.782085 | orchestrator | 2025-09-06 00:18:46.782096 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-06 00:18:46.782136 | orchestrator | Saturday 06 September 2025 00:18:44 +0000 (0:00:01.050) 0:00:06.954 **** 2025-09-06 00:18:46.782147 | orchestrator | ok: [testbed-manager] 2025-09-06 00:18:46.782157 | orchestrator | 2025-09-06 00:18:46.782168 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-06 00:18:46.782178 | orchestrator | Saturday 06 September 2025 00:18:45 +0000 (0:00:00.920) 0:00:07.874 **** 2025-09-06 00:18:46.782200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-06 00:18:46.782221 | orchestrator | 2025-09-06 00:18:46.782232 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-06 00:18:46.782242 | orchestrator | Saturday 06 September 2025 00:18:45 +0000 (0:00:00.064) 0:00:07.938 **** 2025-09-06 00:18:46.782253 | orchestrator | changed: [testbed-manager] 2025-09-06 00:18:46.782264 | orchestrator | 2025-09-06 00:18:46.782274 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:18:46.782286 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-06 00:18:46.782297 | orchestrator | 2025-09-06 00:18:46.782308 | orchestrator | 2025-09-06 00:18:46.782319 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:18:46.782329 | orchestrator | Saturday 06 September 2025 00:18:46 +0000 (0:00:01.081) 0:00:09.020 **** 2025-09-06 00:18:46.782340 | orchestrator | =============================================================================== 2025-09-06 00:18:46.782351 | orchestrator | Gathering Facts --------------------------------------------------------- 3.42s 2025-09-06 00:18:46.782361 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.08s 2025-09-06 00:18:46.782372 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s 2025-09-06 00:18:46.782383 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.00s 2025-09-06 00:18:46.782393 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.92s 2025-09-06 00:18:46.782404 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.51s 2025-09-06 00:18:46.782432 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s 2025-09-06 00:18:46.782444 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-09-06 00:18:46.782455 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-09-06 
00:18:46.782466 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-09-06 00:18:46.782476 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-09-06 00:18:46.782487 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.06s 2025-09-06 00:18:46.782498 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-09-06 00:18:47.061971 | orchestrator | + osism apply sshconfig 2025-09-06 00:18:59.106345 | orchestrator | 2025-09-06 00:18:59 | INFO  | Task 981a3c95-546d-4d45-9229-756bc725bb86 (sshconfig) was prepared for execution. 2025-09-06 00:18:59.106467 | orchestrator | 2025-09-06 00:18:59 | INFO  | It takes a moment until task 981a3c95-546d-4d45-9229-756bc725bb86 (sshconfig) has been started and output is visible here. 2025-09-06 00:19:10.275016 | orchestrator | 2025-09-06 00:19:10.275154 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-06 00:19:10.275169 | orchestrator | 2025-09-06 00:19:10.275181 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-06 00:19:10.275192 | orchestrator | Saturday 06 September 2025 00:19:02 +0000 (0:00:00.155) 0:00:00.155 **** 2025-09-06 00:19:10.275203 | orchestrator | ok: [testbed-manager] 2025-09-06 00:19:10.275215 | orchestrator | 2025-09-06 00:19:10.275226 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-06 00:19:10.275237 | orchestrator | Saturday 06 September 2025 00:19:03 +0000 (0:00:00.558) 0:00:00.714 **** 2025-09-06 00:19:10.275248 | orchestrator | changed: [testbed-manager] 2025-09-06 00:19:10.275259 | orchestrator | 2025-09-06 00:19:10.275270 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-06 00:19:10.275282 | orchestrator | Saturday 06 September 2025 00:19:03 +0000 (0:00:00.494) 0:00:01.208 **** 2025-09-06 00:19:10.275293 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-06 00:19:10.275304 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-06 00:19:10.275342 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-06 00:19:10.275354 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-06 00:19:10.275365 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-06 00:19:10.275392 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-06 00:19:10.275403 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-06 00:19:10.275414 | orchestrator | 2025-09-06 00:19:10.275425 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-06 00:19:10.275436 | orchestrator | Saturday 06 September 2025 00:19:09 +0000 (0:00:05.531) 0:00:06.739 **** 2025-09-06 00:19:10.275446 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:19:10.275457 | orchestrator | 2025-09-06 00:19:10.275468 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-06 00:19:10.275479 | orchestrator | Saturday 06 September 2025 00:19:09 +0000 (0:00:00.064) 0:00:06.804 **** 2025-09-06 00:19:10.275489 | orchestrator | changed: [testbed-manager] 2025-09-06 00:19:10.275500 | orchestrator | 2025-09-06 00:19:10.275510 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:19:10.275523 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:19:10.275534 | orchestrator | 2025-09-06 00:19:10.275545 | orchestrator | 2025-09-06 00:19:10.275556 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:19:10.275568 | orchestrator | Saturday 06 September 2025 00:19:10 +0000 (0:00:00.558) 0:00:07.362 **** 2025-09-06 00:19:10.275581 | orchestrator | =============================================================================== 2025-09-06 00:19:10.275594 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.53s 2025-09-06 00:19:10.275607 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2025-09-06 00:19:10.275619 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2025-09-06 00:19:10.275631 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-09-06 00:19:10.275644 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-09-06 00:19:10.537731 | orchestrator | + osism apply known-hosts 2025-09-06 00:19:22.468656 | orchestrator | 2025-09-06 00:19:22 | INFO  | Task e402c84d-5789-4b1c-a3eb-f5fb2614b8c8 (known-hosts) was prepared for execution. 2025-09-06 00:19:22.469644 | orchestrator | 2025-09-06 00:19:22 | INFO  | It takes a moment until task e402c84d-5789-4b1c-a3eb-f5fb2614b8c8 (known-hosts) has been started and output is visible here. 2025-09-06 00:19:39.197912 | orchestrator | 2025-09-06 00:19:39.198120 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-06 00:19:39.198139 | orchestrator | 2025-09-06 00:19:39.198150 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-06 00:19:39.198161 | orchestrator | Saturday 06 September 2025 00:19:26 +0000 (0:00:00.121) 0:00:00.121 **** 2025-09-06 00:19:39.198172 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-06 00:19:39.198182 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-06 00:19:39.198192 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-06 00:19:39.198202 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-06 00:19:39.198211 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-06 00:19:39.198221 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-06 00:19:39.198230 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-06 00:19:39.198240 | orchestrator | 2025-09-06 00:19:39.198250 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-06 00:19:39.198261 | orchestrator | Saturday 06 September 2025 00:19:31 +0000 (0:00:05.672) 0:00:05.794 **** 2025-09-06 00:19:39.198294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-06 00:19:39.198307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned 
entries of testbed-node-0) 2025-09-06 00:19:39.198316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-06 00:19:39.198326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-06 00:19:39.198336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-06 00:19:39.198356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-06 00:19:39.198367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-06 00:19:39.198376 | orchestrator | 2025-09-06 00:19:39.198386 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:39.198396 | orchestrator | Saturday 06 September 2025 00:19:31 +0000 (0:00:00.141) 0:00:05.936 **** 2025-09-06 00:19:39.198407 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEUAj3YdektqxymTUh6v43G1WFKHEJIGoPSXXH2fHIZA6aRx8IICiDhYWK68mkeGwP4wMF6l1Vqu63aY9MsOUsE=) 2025-09-06 00:19:39.198421 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9kNep5dM1XALTPZG/Ngpcl2XJxRUToTyLHInvFlUgevy7HNhCpYfMEQGYcaJHb3l/8dpNl2Wbt8WQOb28vfxkzERd2Tp3i4ZLgyuCDCjEiX0DaOjeZx2tSLI5mymvgSvmL/T3ykwo8wLGLbc++P+RmutUfB0O/wmI+MChG730sByyjJ6Cuj8JCHBkbz0RyK6geUF6rnQ0sIog6XNNWhvdM9qDFNKmYdTlDCCEQPvuBRSlPOte6CBX+vLx/jfsophOj4KGL/eTy/rbLqFmpDdDp3iiH4XKj3gTFOp1q6hjrNVKnovFtAJ12hpdCZ/8ANs0bxXfFr8WKW7g7vfgTa2sAU1vhzljgsbNJOuu8jihDzsArrPBcvxevWxDR1w8D0j35Ds7nxpB3ZWOZ064PtlXuWzuAm71Vmni4M4Zjw7GwIwJsYZ3kjTtSmnOb0g0ZmVecg633p9SHFeuefi+EeNl7p+/yUs7p4xDhOBHONnoCulr34jfqaGtlGO4gw0QPKE=) 2025-09-06 00:19:39.198436 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0PZQ1itfuHNGARGxhcbmsAoA2FftVUyBCcJrcxtZGL) 2025-09-06 00:19:39.198449 | orchestrator | 2025-09-06 00:19:39.198460 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:39.198471 | orchestrator | Saturday 06 September 2025 00:19:33 +0000 (0:00:01.064) 0:00:07.000 **** 2025-09-06 00:19:39.198482 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOEC/F5yJ7QUBrejHKFG9pWIBdH4EFuVC8G06AarLm5H9diT3UDnj+EDmdX168F+mcG92zYUi3lH07oubjrs3pw=) 2025-09-06 00:19:39.198518 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCTxx5EMDRQrIBDVjtitZYdfnRv6dS9Zzo4qu9wzWh8V7Cjb66xo6eblV7Z8hfdSjxULknWgRKS3hZCfD0RglM/Eb37aU7+5x3+lTH0Ywc3BrHmcRxfy8/ekpJ4pihsi0+RM6ossQDRZ+82AXWtdOfT9gx2hoTG+biFILgWP97tzJPJbhZsavfaV0c48bBYaZsZ8WPOXpwtkGUxI7PsXdFgvO++d709+JaXlgPo5tfNacMqN13dAIlBZz8y88WCBwMc7JDWVcR6F9J4RmmloYyIDTShu8OCwhI8a6bL3Ng9reYI4t7Ug8GfdjEI4ob1RgmTLdcIutzHelZAubEtLdI4mqLSu73sBhG1tJ+TjHdqOdoNkwkkFV4QrEwvOy7SiYxOsEGr2WDBL24Kxl3yJysPjw7fKxYgHSTJ6/SUB4dcIxLrMX+c15UfX1ao7JNjRx8EVrc7orGfJ+m/x/NZme0P5uz7EWGoHAziREia/H8Uo900QfXBwZTHg+NMdddIK2c=) 2025-09-06 00:19:39.198531 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFEGbIgSrIXEoKvJxHGMhgHwchv7sqtLrtAIMPppAnJ/) 2025-09-06 00:19:39.198551 | orchestrator | 2025-09-06 00:19:39.198563 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:39.198574 | orchestrator | Saturday 06 September 2025 00:19:35 +0000 (0:00:02.026) 0:00:09.027 **** 2025-09-06 00:19:39.198587 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCPKcaLSD3zxlxigHlXXK4ePmQXwVVEy1DPF0iwjrudOdd4o/hK6SENfkcTa7DvuhsGX41LDrrv8b8uNkKO9qif8KRn+D4nFO9qKgTLkB5qw+74SxyST0qavEkpPGADXglgo902pA8rcy0FlaFCbX0B7DWQvLzaroqfcPwz4atPBJeCOH3cCQB6RpmfqJwYZ8y+udl8DGYrIpHS2om14+4RS7n4WhZZRPg8aAwpatWh/XlLccgZcHozqEMqRm1uBf8o1Z2rUTzHckO1WFeIRWXtw9NFti6F1Cbrzadk2mrKIzEPzhbyq7+k+js4FQ2xTM2hFfXi/ckzFh7Wq6G2mGFJDeRGDfKew5gGb+ZWGwGHqof1kRCmTqxPL5WYgNXzUf59QFvWwjnZH/qo5wKk424C+YI3L7aoY2nHMbIJ1jgBGlOhQVUq5i0O26wxyUhono2rz7Dr8qnDhKzD9wDEdm4Imdv0+2iZuqFY4w+eesSI1AIf7daRpTOOGN4T2h70RoE=) 2025-09-06 00:19:39.198599 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGn1EIkstSaWGu9OMYWhyU/Iv7oXFcFkEEfCUqVbLZQ9+ALJ/DhLSt5X6o8bqBlDUTC8q9QDWNEnzbgYKGc4dNQ=) 2025-09-06 00:19:39.198610 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAEuwypAKkalj8VHpsHl1C+j8vDyFgY5hb9KvFNkRATY) 2025-09-06 00:19:39.198621 | orchestrator | 2025-09-06 00:19:39.198633 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:39.198643 | orchestrator | Saturday 06 September 2025 00:19:36 +0000 (0:00:01.002) 0:00:10.029 **** 2025-09-06 00:19:39.198654 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL9Cqiyg7I694WzqJwDNJo09RxU7ZB+Y5zNWGMbdWAdS) 2025-09-06 00:19:39.198726 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGuohX7pBOi/71of6Qs9rUaZnhBLNFNiiarLomFpjCgQrGACs3e/ovPVZrPxG/h9tINFYtazpKCG+/ie87XkobPWTWn8h6R2Iwxlj+Pv5rtw4xAlQwv1NwfbXag+SAHe6TE9cFSv0KnCaP52Fu9mJn72TcwLzjdY4N4nRweAgxsVL83wOj+HPV9IgbMkg4qB4em7awQN5ddSJk59Ke67BPsUEktHlYAeOM6YD/L2juRcrbLRLCqgyAzPiKcpO+IzN66/Amo15RhIlquj7yxcYIIfssIRLTPyDeKWQ14pVqZd6iyH/fjhBfaNxcPeFo9BU61nuMkqRHqPlXb3bQXpjL0r5RNe+DnzhM9jTJsalDiyUVAX7lRPuDBd0/+Gb88N26RNslxwGyQQvuo6XNtNRJuy7LhKscUCzvpCy2jgRnbFAvOJwhQdOhu06y76wRzjqWijkC2FU9jz0COjw4xHdJ61+0ysy9FGrF44+7zEUGTX0kSwQKeRicOuaxNwSvQTM=) 2025-09-06 00:19:39.198739 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQj8KJxiRfFKwS29Y8EIfU0elP4MxCQ5aPbqqlFUY/vSlhT0Vfsnh6jLS+WwBNerjMdWybbQrsUcf5oir/1ltU=) 2025-09-06 00:19:39.198751 | orchestrator | 2025-09-06 00:19:39.198762 | orchestrator | TASK 
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:39.198773 | orchestrator | Saturday 06 September 2025 00:19:37 +0000 (0:00:01.026) 0:00:11.055 **** 2025-09-06 00:19:39.198785 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI7tiN7AaGCSaqvcG6BnGPvm9I6NhfSZfiXG3EQGT0nIdukFDAX85jExqH8vhw4qfYYXW98tAizsti4NydR4IcE=) 2025-09-06 00:19:39.198796 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRCH6O7KjJ4OGML3yc7av1B/lK2D0zeeBsr4ca/TLepQn5rQuj2HUPJR9cMnZMPKunCX+7iQiNP3ivygZf8AN67t2/9AXa+k6E983XBs45zPM6DW0t5hZkpTVMZXtM2fSKQRU/f4lWt65wvK975Ir4+j8fDKeL7qTO4HG6iKvQ2oG+nQ76eDe2nelhyUwbZV+WvqNPiJ3cE16HGKi6Vd9/sZNK/nWthO8sdHQE5ghn1LGfd8a0MX5D43IAO9YpCFIR+zviT72b5YfTzPNst97902/U3ZTC5+7RR/4TwhntrckMAxVGi6SQHK7+BH5xl6Bj4j0mVBNlTHsMCaRNaihYAfvVmF+gzsL+LsQku7HiEHd/vw8uFB4+A4Goem43B+W0kzr7LvPHaMw90IhO0ITrVfQY3mv6o+bdNUOFQBA1m6xkMNgp4iMyAzHNANOektQgHgLr4HFyr5IcqnK4/XAB0mGRzMywfMZk2/xYJk827xY8fGNJ7GxYA4n6lq1ctUs=) 2025-09-06 00:19:39.198806 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBLL+IAGM20WVzIM4TxthFhgINKYlROG/6SXzBljFKl4) 2025-09-06 00:19:39.198822 | orchestrator | 2025-09-06 00:19:39.198833 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:39.198842 | orchestrator | Saturday 06 September 2025 00:19:38 +0000 (0:00:01.028) 0:00:12.083 **** 2025-09-06 00:19:39.198858 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMTRfnR9x483cQr4BkOvvnQ8DCqJPJeUzFPx8fwkChf05oOO+WsZ/5pSidySOQ1F85vVaDojbmHquf/oQl4Jrb0=) 2025-09-06 00:19:50.687174 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCD/QmVjW1hCB0N1IF7FcSRUYgBR0baxwnvxwCOl9sZHNg4KwI4c4wRH+lF+njs2OZv2YVISbAnw/517+YLl3I3x/LmTCLoge+nKS+rTpiMhjBLN3aDBx71ifXitoVnnq55T+oMeIBgCaxkUVzK7HrLMSWvq3NNLY53XE36/1B7z5gnn+MOPakSJIsfQN8Pu+jdhRQoMx6TmN8IHhxiYhnpulWRaSCBPpHUMhWWDLlXwrHB0su5XGXE9KE7occiJdIPFmGAZCVgekELIcgnBLIXxE20+g9LGLAVTuhAIXHFYD0/CH3trqHZ1JMkVPXpasoADehaBo0bpJ94Soa+fejw6B8rb24CdrJl9m2Z7YsrrqAW3+NvtqYGXc3GvR5sAtFxmKxmANMdrt9ePMAbSxt1V9N932bGwPm4jea4SCslleXhEmI+nv/o+wtJLuY5RDQixAOtVWkj/zrVXxL92MIzjkYDmlN8UC9jTvpVsM5OEwuoke9W4Lr1e0hd0Zoqd90=) 2025-09-06 00:19:50.687302 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJuWZl8kB17RmGU+RMyrwL6PC9lioEjF979meBk4C9Sk) 2025-09-06 00:19:50.687321 | orchestrator | 2025-09-06 00:19:50.687334 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:50.687346 | orchestrator | Saturday 06 September 2025 00:19:39 +0000 (0:00:01.054) 0:00:13.138 **** 2025-09-06 00:19:50.687358 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL68wtVZpzwvjGEYh4QuQhFFOHk0ljJR/wqcUbGpj4gSvAogEkcsMGX/dPd4VRpyfnVk/1ikLd3iptO1+pHavxA=) 2025-09-06 00:19:50.687371 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILAha6xaAES4gjfJPkAbgvdOGxN+jeQmiEiTCfClbTWl) 2025-09-06 00:19:50.687383 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMCUXANg1Mncx8gpa/V62Iy+/Uks2G7uUzHwr2X46NNo5D6y6Gt7067bfiaA0ZIAdO7ldFViDWO1lzKCW1ZVDXQCbv3NhSC/VLHpaatPP0rX35z5zXBNfETPwm5aoT0I3zcMj+4HoOOoSLRXPIFEK1/s7r94yAHUE5iuKa0BtTnU7U5e8nDd0sFAVSEcVboyS1orGlz8y2vd0ZpzSMz+bvPe49CbIrsnTs+apbYiHadNsV/qR1re06jXbY15Sb2fBqdkFofBQRBC8yWGJz2+RoShVDhpNRv7yF/wdn1xuhqGoh0RrIcQJzxNoGEzJcQdY2Yd60kPAt0O7lUll5/NJW6CZF2QnTUYDGxe42Ru6CQV480HidPVutYO2Ss/5DKcNLC1jKERnNsWm3rFzacNDottCOvD6gfmYLGrP+KhUfLKipnLP3+b2/NmeUlJ0aws+bJ4rwPwhfRs2Dk/4ZJMP8rEj+9W+R4LNOV8z3pN35JZoyTvG5d06uSMci+KX0hgs=) 2025-09-06 00:19:50.687395 | orchestrator | 2025-09-06 00:19:50.687406 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-06 00:19:50.687418 | orchestrator | Saturday 06 September 2025 00:19:40 +0000 (0:00:01.043) 0:00:14.181 **** 2025-09-06 00:19:50.687430 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-06 00:19:50.687441 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-06 00:19:50.687451 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-06 00:19:50.687462 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-06 00:19:50.687473 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-06 00:19:50.687483 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-06 00:19:50.687494 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-06 00:19:50.687505 | orchestrator | 2025-09-06 00:19:50.687516 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-06 00:19:50.687527 | orchestrator | Saturday 06 September 2025 00:19:45 +0000 (0:00:05.067) 0:00:19.249 **** 2025-09-06 00:19:50.687558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-06 00:19:50.687571 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-06 00:19:50.687605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-06 00:19:50.687616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-06 00:19:50.687671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-06 00:19:50.687688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-06 00:19:50.687700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-06 00:19:50.687710 | orchestrator | 2025-09-06 00:19:50.687738 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 
00:19:50.687750 | orchestrator | Saturday 06 September 2025 00:19:45 +0000 (0:00:00.165) 0:00:19.414 **** 2025-09-06 00:19:50.687761 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0PZQ1itfuHNGARGxhcbmsAoA2FftVUyBCcJrcxtZGL) 2025-09-06 00:19:50.687774 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9kNep5dM1XALTPZG/Ngpcl2XJxRUToTyLHInvFlUgevy7HNhCpYfMEQGYcaJHb3l/8dpNl2Wbt8WQOb28vfxkzERd2Tp3i4ZLgyuCDCjEiX0DaOjeZx2tSLI5mymvgSvmL/T3ykwo8wLGLbc++P+RmutUfB0O/wmI+MChG730sByyjJ6Cuj8JCHBkbz0RyK6geUF6rnQ0sIog6XNNWhvdM9qDFNKmYdTlDCCEQPvuBRSlPOte6CBX+vLx/jfsophOj4KGL/eTy/rbLqFmpDdDp3iiH4XKj3gTFOp1q6hjrNVKnovFtAJ12hpdCZ/8ANs0bxXfFr8WKW7g7vfgTa2sAU1vhzljgsbNJOuu8jihDzsArrPBcvxevWxDR1w8D0j35Ds7nxpB3ZWOZ064PtlXuWzuAm71Vmni4M4Zjw7GwIwJsYZ3kjTtSmnOb0g0ZmVecg633p9SHFeuefi+EeNl7p+/yUs7p4xDhOBHONnoCulr34jfqaGtlGO4gw0QPKE=) 2025-09-06 00:19:50.687787 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEUAj3YdektqxymTUh6v43G1WFKHEJIGoPSXXH2fHIZA6aRx8IICiDhYWK68mkeGwP4wMF6l1Vqu63aY9MsOUsE=) 2025-09-06 00:19:50.687798 | orchestrator | 2025-09-06 00:19:50.687809 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:50.687820 | orchestrator | Saturday 06 September 2025 00:19:47 +0000 (0:00:02.043) 0:00:21.458 **** 2025-09-06 00:19:50.687831 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOEC/F5yJ7QUBrejHKFG9pWIBdH4EFuVC8G06AarLm5H9diT3UDnj+EDmdX168F+mcG92zYUi3lH07oubjrs3pw=) 2025-09-06 00:19:50.687843 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTxx5EMDRQrIBDVjtitZYdfnRv6dS9Zzo4qu9wzWh8V7Cjb66xo6eblV7Z8hfdSjxULknWgRKS3hZCfD0RglM/Eb37aU7+5x3+lTH0Ywc3BrHmcRxfy8/ekpJ4pihsi0+RM6ossQDRZ+82AXWtdOfT9gx2hoTG+biFILgWP97tzJPJbhZsavfaV0c48bBYaZsZ8WPOXpwtkGUxI7PsXdFgvO++d709+JaXlgPo5tfNacMqN13dAIlBZz8y88WCBwMc7JDWVcR6F9J4RmmloYyIDTShu8OCwhI8a6bL3Ng9reYI4t7Ug8GfdjEI4ob1RgmTLdcIutzHelZAubEtLdI4mqLSu73sBhG1tJ+TjHdqOdoNkwkkFV4QrEwvOy7SiYxOsEGr2WDBL24Kxl3yJysPjw7fKxYgHSTJ6/SUB4dcIxLrMX+c15UfX1ao7JNjRx8EVrc7orGfJ+m/x/NZme0P5uz7EWGoHAziREia/H8Uo900QfXBwZTHg+NMdddIK2c=) 2025-09-06 00:19:50.687854 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFEGbIgSrIXEoKvJxHGMhgHwchv7sqtLrtAIMPppAnJ/) 2025-09-06 00:19:50.687865 | orchestrator | 2025-09-06 00:19:50.687876 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:50.687887 | orchestrator | Saturday 06 September 2025 00:19:48 +0000 (0:00:01.034) 0:00:22.492 **** 2025-09-06 00:19:50.687906 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAEuwypAKkalj8VHpsHl1C+j8vDyFgY5hb9KvFNkRATY) 2025-09-06 00:19:50.687918 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCPKcaLSD3zxlxigHlXXK4ePmQXwVVEy1DPF0iwjrudOdd4o/hK6SENfkcTa7DvuhsGX41LDrrv8b8uNkKO9qif8KRn+D4nFO9qKgTLkB5qw+74SxyST0qavEkpPGADXglgo902pA8rcy0FlaFCbX0B7DWQvLzaroqfcPwz4atPBJeCOH3cCQB6RpmfqJwYZ8y+udl8DGYrIpHS2om14+4RS7n4WhZZRPg8aAwpatWh/XlLccgZcHozqEMqRm1uBf8o1Z2rUTzHckO1WFeIRWXtw9NFti6F1Cbrzadk2mrKIzEPzhbyq7+k+js4FQ2xTM2hFfXi/ckzFh7Wq6G2mGFJDeRGDfKew5gGb+ZWGwGHqof1kRCmTqxPL5WYgNXzUf59QFvWwjnZH/qo5wKk424C+YI3L7aoY2nHMbIJ1jgBGlOhQVUq5i0O26wxyUhono2rz7Dr8qnDhKzD9wDEdm4Imdv0+2iZuqFY4w+eesSI1AIf7daRpTOOGN4T2h70RoE=) 2025-09-06 00:19:50.687930 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGn1EIkstSaWGu9OMYWhyU/Iv7oXFcFkEEfCUqVbLZQ9+ALJ/DhLSt5X6o8bqBlDUTC8q9QDWNEnzbgYKGc4dNQ=) 2025-09-06 00:19:50.687941 | orchestrator | 2025-09-06 00:19:50.687951 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:50.687963 | orchestrator | Saturday 06 September 2025 00:19:49 +0000 (0:00:01.065) 0:00:23.558 **** 2025-09-06 00:19:50.687994 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQj8KJxiRfFKwS29Y8EIfU0elP4MxCQ5aPbqqlFUY/vSlhT0Vfsnh6jLS+WwBNerjMdWybbQrsUcf5oir/1ltU=) 2025-09-06 00:19:50.688029 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGuohX7pBOi/71of6Qs9rUaZnhBLNFNiiarLomFpjCgQrGACs3e/ovPVZrPxG/h9tINFYtazpKCG+/ie87XkobPWTWn8h6R2Iwxlj+Pv5rtw4xAlQwv1NwfbXag+SAHe6TE9cFSv0KnCaP52Fu9mJn72TcwLzjdY4N4nRweAgxsVL83wOj+HPV9IgbMkg4qB4em7awQN5ddSJk59Ke67BPsUEktHlYAeOM6YD/L2juRcrbLRLCqgyAzPiKcpO+IzN66/Amo15RhIlquj7yxcYIIfssIRLTPyDeKWQ14pVqZd6iyH/fjhBfaNxcPeFo9BU61nuMkqRHqPlXb3bQXpjL0r5RNe+DnzhM9jTJsalDiyUVAX7lRPuDBd0/+Gb88N26RNslxwGyQQvuo6XNtNRJuy7LhKscUCzvpCy2jgRnbFAvOJwhQdOhu06y76wRzjqWijkC2FU9jz0COjw4xHdJ61+0ysy9FGrF44+7zEUGTX0kSwQKeRicOuaxNwSvQTM=) 2025-09-06 00:19:54.852945 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL9Cqiyg7I694WzqJwDNJo09RxU7ZB+Y5zNWGMbdWAdS) 2025-09-06 00:19:54.853114 | orchestrator | 2025-09-06 00:19:54.853146 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:54.853168 | orchestrator | Saturday 06 September 2025 00:19:50 +0000 (0:00:01.066) 0:00:24.624 **** 2025-09-06 00:19:54.853189 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRCH6O7KjJ4OGML3yc7av1B/lK2D0zeeBsr4ca/TLepQn5rQuj2HUPJR9cMnZMPKunCX+7iQiNP3ivygZf8AN67t2/9AXa+k6E983XBs45zPM6DW0t5hZkpTVMZXtM2fSKQRU/f4lWt65wvK975Ir4+j8fDKeL7qTO4HG6iKvQ2oG+nQ76eDe2nelhyUwbZV+WvqNPiJ3cE16HGKi6Vd9/sZNK/nWthO8sdHQE5ghn1LGfd8a0MX5D43IAO9YpCFIR+zviT72b5YfTzPNst97902/U3ZTC5+7RR/4TwhntrckMAxVGi6SQHK7+BH5xl6Bj4j0mVBNlTHsMCaRNaihYAfvVmF+gzsL+LsQku7HiEHd/vw8uFB4+A4Goem43B+W0kzr7LvPHaMw90IhO0ITrVfQY3mv6o+bdNUOFQBA1m6xkMNgp4iMyAzHNANOektQgHgLr4HFyr5IcqnK4/XAB0mGRzMywfMZk2/xYJk827xY8fGNJ7GxYA4n6lq1ctUs=) 2025-09-06 00:19:54.853214 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI7tiN7AaGCSaqvcG6BnGPvm9I6NhfSZfiXG3EQGT0nIdukFDAX85jExqH8vhw4qfYYXW98tAizsti4NydR4IcE=) 2025-09-06 00:19:54.853235 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBLL+IAGM20WVzIM4TxthFhgINKYlROG/6SXzBljFKl4) 2025-09-06 
00:19:54.853253 | orchestrator | 2025-09-06 00:19:54.853272 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:54.853289 | orchestrator | Saturday 06 September 2025 00:19:51 +0000 (0:00:01.048) 0:00:25.673 **** 2025-09-06 00:19:54.853306 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJuWZl8kB17RmGU+RMyrwL6PC9lioEjF979meBk4C9Sk) 2025-09-06 00:19:54.853359 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCD/QmVjW1hCB0N1IF7FcSRUYgBR0baxwnvxwCOl9sZHNg4KwI4c4wRH+lF+njs2OZv2YVISbAnw/517+YLl3I3x/LmTCLoge+nKS+rTpiMhjBLN3aDBx71ifXitoVnnq55T+oMeIBgCaxkUVzK7HrLMSWvq3NNLY53XE36/1B7z5gnn+MOPakSJIsfQN8Pu+jdhRQoMx6TmN8IHhxiYhnpulWRaSCBPpHUMhWWDLlXwrHB0su5XGXE9KE7occiJdIPFmGAZCVgekELIcgnBLIXxE20+g9LGLAVTuhAIXHFYD0/CH3trqHZ1JMkVPXpasoADehaBo0bpJ94Soa+fejw6B8rb24CdrJl9m2Z7YsrrqAW3+NvtqYGXc3GvR5sAtFxmKxmANMdrt9ePMAbSxt1V9N932bGwPm4jea4SCslleXhEmI+nv/o+wtJLuY5RDQixAOtVWkj/zrVXxL92MIzjkYDmlN8UC9jTvpVsM5OEwuoke9W4Lr1e0hd0Zoqd90=) 2025-09-06 00:19:54.853380 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMTRfnR9x483cQr4BkOvvnQ8DCqJPJeUzFPx8fwkChf05oOO+WsZ/5pSidySOQ1F85vVaDojbmHquf/oQl4Jrb0=) 2025-09-06 00:19:54.853400 | orchestrator | 2025-09-06 00:19:54.853419 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-06 00:19:54.853438 | orchestrator | Saturday 06 September 2025 00:19:52 +0000 (0:00:01.040) 0:00:26.714 **** 2025-09-06 00:19:54.853457 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMCUXANg1Mncx8gpa/V62Iy+/Uks2G7uUzHwr2X46NNo5D6y6Gt7067bfiaA0ZIAdO7ldFViDWO1lzKCW1ZVDXQCbv3NhSC/VLHpaatPP0rX35z5zXBNfETPwm5aoT0I3zcMj+4HoOOoSLRXPIFEK1/s7r94yAHUE5iuKa0BtTnU7U5e8nDd0sFAVSEcVboyS1orGlz8y2vd0ZpzSMz+bvPe49CbIrsnTs+apbYiHadNsV/qR1re06jXbY15Sb2fBqdkFofBQRBC8yWGJz2+RoShVDhpNRv7yF/wdn1xuhqGoh0RrIcQJzxNoGEzJcQdY2Yd60kPAt0O7lUll5/NJW6CZF2QnTUYDGxe42Ru6CQV480HidPVutYO2Ss/5DKcNLC1jKERnNsWm3rFzacNDottCOvD6gfmYLGrP+KhUfLKipnLP3+b2/NmeUlJ0aws+bJ4rwPwhfRs2Dk/4ZJMP8rEj+9W+R4LNOV8z3pN35JZoyTvG5d06uSMci+KX0hgs=) 2025-09-06 00:19:54.853478 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL68wtVZpzwvjGEYh4QuQhFFOHk0ljJR/wqcUbGpj4gSvAogEkcsMGX/dPd4VRpyfnVk/1ikLd3iptO1+pHavxA=) 2025-09-06 00:19:54.853499 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILAha6xaAES4gjfJPkAbgvdOGxN+jeQmiEiTCfClbTWl) 2025-09-06 00:19:54.853517 | orchestrator | 2025-09-06 00:19:54.853535 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-06 00:19:54.853554 | orchestrator | Saturday 06 September 2025 00:19:53 +0000 (0:00:01.072) 0:00:27.786 **** 2025-09-06 00:19:54.853572 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-06 00:19:54.853591 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-06 00:19:54.853609 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-06 00:19:54.853628 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-06 00:19:54.853647 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-06 00:19:54.853690 | orchestrator | skipping: 
[testbed-manager] => (item=testbed-node-4)  2025-09-06 00:19:54.853708 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-06 00:19:54.853728 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:19:54.853748 | orchestrator | 2025-09-06 00:19:54.853767 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-06 00:19:54.853785 | orchestrator | Saturday 06 September 2025 00:19:53 +0000 (0:00:00.151) 0:00:27.937 **** 2025-09-06 00:19:54.853804 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:19:54.853822 | orchestrator | 2025-09-06 00:19:54.853840 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-06 00:19:54.853859 | orchestrator | Saturday 06 September 2025 00:19:54 +0000 (0:00:00.069) 0:00:28.007 **** 2025-09-06 00:19:54.853875 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:19:54.853892 | orchestrator | 2025-09-06 00:19:54.853910 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-06 00:19:54.853928 | orchestrator | Saturday 06 September 2025 00:19:54 +0000 (0:00:00.064) 0:00:28.072 **** 2025-09-06 00:19:54.854127 | orchestrator | changed: [testbed-manager] 2025-09-06 00:19:54.854153 | orchestrator | 2025-09-06 00:19:54.854172 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:19:54.854190 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-06 00:19:54.854210 | orchestrator | 2025-09-06 00:19:54.854228 | orchestrator | 2025-09-06 00:19:54.854247 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:19:54.854265 | orchestrator | Saturday 06 September 2025 00:19:54 +0000 (0:00:00.499) 0:00:28.572 **** 2025-09-06 00:19:54.854281 | orchestrator | =============================================================================== 2025-09-06 00:19:54.854299 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.67s 2025-09-06 00:19:54.854318 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.07s 2025-09-06 00:19:54.854338 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.04s 2025-09-06 00:19:54.854356 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.03s 2025-09-06 00:19:54.854374 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-06 00:19:54.854391 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-06 00:19:54.854428 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-06 00:19:54.854448 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-06 00:19:54.854465 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-06 00:19:54.854484 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-06 00:19:54.854502 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-06 00:19:54.854520 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-06 
00:19:54.854539 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-06 00:19:54.854557 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-06 00:19:54.854575 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-06 00:19:54.854593 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-09-06 00:19:54.854611 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s 2025-09-06 00:19:54.854630 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-09-06 00:19:54.854649 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2025-09-06 00:19:54.854667 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.14s 2025-09-06 00:19:55.105488 | orchestrator | + osism apply squid 2025-09-06 00:20:07.005561 | orchestrator | 2025-09-06 00:20:07 | INFO  | Task 7a86d336-6136-4ed0-a31b-698dd820638b (squid) was prepared for execution. 2025-09-06 00:20:07.005662 | orchestrator | 2025-09-06 00:20:07 | INFO  | It takes a moment until task 7a86d336-6136-4ed0-a31b-698dd820638b (squid) has been started and output is visible here. 2025-09-06 00:21:58.656484 | orchestrator | 2025-09-06 00:21:58.656608 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-06 00:21:58.656625 | orchestrator | 2025-09-06 00:21:58.656637 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-06 00:21:58.656648 | orchestrator | Saturday 06 September 2025 00:20:10 +0000 (0:00:00.146) 0:00:00.146 **** 2025-09-06 00:21:58.656679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-06 00:21:58.656692 | orchestrator | 2025-09-06 00:21:58.656703 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-06 00:21:58.656740 | orchestrator | Saturday 06 September 2025 00:20:10 +0000 (0:00:00.075) 0:00:00.221 **** 2025-09-06 00:21:58.656803 | orchestrator | ok: [testbed-manager] 2025-09-06 00:21:58.656825 | orchestrator | 2025-09-06 00:21:58.656844 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-06 00:21:58.656856 | orchestrator | Saturday 06 September 2025 00:20:11 +0000 (0:00:01.166) 0:00:01.387 **** 2025-09-06 00:21:58.656868 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-06 00:21:58.656879 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-06 00:21:58.656889 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-06 00:21:58.656900 | orchestrator | 2025-09-06 00:21:58.656911 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-06 00:21:58.656922 | orchestrator | Saturday 06 September 2025 00:20:12 +0000 (0:00:01.001) 0:00:02.389 **** 2025-09-06 00:21:58.656932 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-06 00:21:58.656943 | orchestrator | 2025-09-06 00:21:58.656954 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration 
file] *** 2025-09-06 00:21:58.656965 | orchestrator | Saturday 06 September 2025 00:20:13 +0000 (0:00:01.063) 0:00:03.453 **** 2025-09-06 00:21:58.656976 | orchestrator | ok: [testbed-manager] 2025-09-06 00:21:58.656987 | orchestrator | 2025-09-06 00:21:58.656998 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-06 00:21:58.657009 | orchestrator | Saturday 06 September 2025 00:20:14 +0000 (0:00:00.359) 0:00:03.813 **** 2025-09-06 00:21:58.657021 | orchestrator | changed: [testbed-manager] 2025-09-06 00:21:58.657034 | orchestrator | 2025-09-06 00:21:58.657046 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-06 00:21:58.657058 | orchestrator | Saturday 06 September 2025 00:20:15 +0000 (0:00:00.902) 0:00:04.715 **** 2025-09-06 00:21:58.657071 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-09-06 00:21:58.657084 | orchestrator | ok: [testbed-manager] 2025-09-06 00:21:58.657096 | orchestrator | 2025-09-06 00:21:58.657107 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-06 00:21:58.657120 | orchestrator | Saturday 06 September 2025 00:20:45 +0000 (0:00:30.493) 0:00:35.209 **** 2025-09-06 00:21:58.657133 | orchestrator | changed: [testbed-manager] 2025-09-06 00:21:58.657146 | orchestrator | 2025-09-06 00:21:58.657158 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-06 00:21:58.657171 | orchestrator | Saturday 06 September 2025 00:20:57 +0000 (0:00:12.050) 0:00:47.260 **** 2025-09-06 00:21:58.657185 | orchestrator | Pausing for 60 seconds 2025-09-06 00:21:58.657199 | orchestrator | changed: [testbed-manager] 2025-09-06 00:21:58.657212 | orchestrator | 2025-09-06 00:21:58.657224 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-06 00:21:58.657236 | orchestrator | Saturday 06 September 2025 00:21:57 +0000 (0:01:00.081) 0:01:47.341 **** 2025-09-06 00:21:58.657248 | orchestrator | ok: [testbed-manager] 2025-09-06 00:21:58.657261 | orchestrator | 2025-09-06 00:21:58.657273 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-06 00:21:58.657286 | orchestrator | Saturday 06 September 2025 00:21:57 +0000 (0:00:00.059) 0:01:47.401 **** 2025-09-06 00:21:58.657298 | orchestrator | changed: [testbed-manager] 2025-09-06 00:21:58.657311 | orchestrator | 2025-09-06 00:21:58.657324 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:21:58.657337 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:21:58.657349 | orchestrator | 2025-09-06 00:21:58.657362 | orchestrator | 2025-09-06 00:21:58.657375 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:21:58.657388 | orchestrator | Saturday 06 September 2025 00:21:58 +0000 (0:00:00.637) 0:01:48.038 **** 2025-09-06 00:21:58.657408 | orchestrator | =============================================================================== 2025-09-06 00:21:58.657419 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-09-06 00:21:58.657430 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.49s 2025-09-06 00:21:58.657441 | 
orchestrator | osism.services.squid : Restart squid service --------------------------- 12.05s 2025-09-06 00:21:58.657451 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.17s 2025-09-06 00:21:58.657462 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s 2025-09-06 00:21:58.657473 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.00s 2025-09-06 00:21:58.657484 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s 2025-09-06 00:21:58.657495 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2025-09-06 00:21:58.657505 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s 2025-09-06 00:21:58.657516 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-09-06 00:21:58.657527 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-09-06 00:21:58.907610 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-06 00:21:58.907936 | orchestrator | ++ semver latest 9.0.0 2025-09-06 00:21:58.952428 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-06 00:21:58.952500 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-06 00:21:58.953487 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-06 00:22:10.960811 | orchestrator | 2025-09-06 00:22:10 | INFO  | Task 3ada24a1-96f7-4bfd-9595-2daba3d659f6 (operator) was prepared for execution. 2025-09-06 00:22:10.960929 | orchestrator | 2025-09-06 00:22:10 | INFO  | It takes a moment until task 3ada24a1-96f7-4bfd-9595-2daba3d659f6 (operator) has been started and output is visible here. 
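The squid handlers above restart the service, pause for 60 seconds, and then wait for a healthy state. A minimal bash sketch of that restart-and-wait sequence follows; the /opt/squid project directory and the squid container name are assumptions inferred from the directories the role creates, and the real handlers run inside the osism-ansible container rather than as a standalone script.

#!/usr/bin/env bash
# Sketch of the restart-and-wait pattern reported by the squid handlers.
# Assumptions: the compose project lives in /opt/squid and the container is
# named squid; only the 60-second pause and the health probe are taken from
# the log output above.
docker compose --project-directory /opt/squid up -d --force-recreate

sleep 60

until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' squid 2>/dev/null)" == "healthy" ]]; do
    sleep 5
done
echo "squid reports healthy"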
2025-09-06 00:22:26.421852 | orchestrator | 2025-09-06 00:22:26.421977 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-06 00:22:26.421995 | orchestrator | 2025-09-06 00:22:26.422008 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-06 00:22:26.422079 | orchestrator | Saturday 06 September 2025 00:22:14 +0000 (0:00:00.147) 0:00:00.147 **** 2025-09-06 00:22:26.422093 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:22:26.422105 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:22:26.422116 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:22:26.422127 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:22:26.422138 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:22:26.422149 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:22:26.422160 | orchestrator | 2025-09-06 00:22:26.422171 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-06 00:22:26.422182 | orchestrator | Saturday 06 September 2025 00:22:18 +0000 (0:00:03.564) 0:00:03.711 **** 2025-09-06 00:22:26.422214 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:22:26.422226 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:22:26.422238 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:22:26.422250 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:22:26.422260 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:22:26.422271 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:22:26.422283 | orchestrator | 2025-09-06 00:22:26.422294 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-06 00:22:26.422305 | orchestrator | 2025-09-06 00:22:26.422316 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-06 00:22:26.422327 | orchestrator | Saturday 06 September 2025 00:22:19 +0000 (0:00:00.725) 0:00:04.436 **** 2025-09-06 00:22:26.422339 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:22:26.422352 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:22:26.422365 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:22:26.422377 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:22:26.422390 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:22:26.422402 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:22:26.422435 | orchestrator | 2025-09-06 00:22:26.422449 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-06 00:22:26.422462 | orchestrator | Saturday 06 September 2025 00:22:19 +0000 (0:00:00.162) 0:00:04.599 **** 2025-09-06 00:22:26.422474 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:22:26.422487 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:22:26.422499 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:22:26.422513 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:22:26.422525 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:22:26.422537 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:22:26.422550 | orchestrator | 2025-09-06 00:22:26.422564 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-06 00:22:26.422576 | orchestrator | Saturday 06 September 2025 00:22:19 +0000 (0:00:00.159) 0:00:04.758 **** 2025-09-06 00:22:26.422590 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:22:26.422602 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:22:26.422615 | orchestrator | changed: [testbed-node-4] 2025-09-06 
00:22:26.422628 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:22:26.422641 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:22:26.422655 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:22:26.422667 | orchestrator | 2025-09-06 00:22:26.422681 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-06 00:22:26.422694 | orchestrator | Saturday 06 September 2025 00:22:20 +0000 (0:00:00.650) 0:00:05.409 **** 2025-09-06 00:22:26.422706 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:22:26.422717 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:22:26.422750 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:22:26.422761 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:22:26.422772 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:22:26.422783 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:22:26.422794 | orchestrator | 2025-09-06 00:22:26.422805 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-06 00:22:26.422816 | orchestrator | Saturday 06 September 2025 00:22:20 +0000 (0:00:00.761) 0:00:06.170 **** 2025-09-06 00:22:26.422827 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-06 00:22:26.422838 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-06 00:22:26.422849 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-06 00:22:26.422859 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-06 00:22:26.422870 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-06 00:22:26.422881 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-06 00:22:26.422892 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-06 00:22:26.422902 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-06 00:22:26.422913 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-06 00:22:26.422924 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-06 00:22:26.422935 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-06 00:22:26.422946 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-06 00:22:26.422956 | orchestrator | 2025-09-06 00:22:26.422967 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-06 00:22:26.422978 | orchestrator | Saturday 06 September 2025 00:22:21 +0000 (0:00:01.118) 0:00:07.289 **** 2025-09-06 00:22:26.422989 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:22:26.423000 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:22:26.423011 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:22:26.423022 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:22:26.423032 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:22:26.423043 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:22:26.423054 | orchestrator | 2025-09-06 00:22:26.423065 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-06 00:22:26.423077 | orchestrator | Saturday 06 September 2025 00:22:23 +0000 (0:00:01.249) 0:00:08.539 **** 2025-09-06 00:22:26.423088 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-06 00:22:26.423107 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-09-06 00:22:26.423118 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-06 00:22:26.423129 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-06 00:22:26.423158 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-06 00:22:26.423170 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-06 00:22:26.423181 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-06 00:22:26.423191 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-06 00:22:26.423202 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-06 00:22:26.423213 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-06 00:22:26.423223 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-06 00:22:26.423234 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-06 00:22:26.423245 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-06 00:22:26.423255 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-06 00:22:26.423266 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-06 00:22:26.423276 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-06 00:22:26.423287 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-06 00:22:26.423297 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-06 00:22:26.423308 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-06 00:22:26.423319 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-06 00:22:26.423329 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-06 00:22:26.423339 | orchestrator | 2025-09-06 00:22:26.423350 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-06 00:22:26.423362 | orchestrator | Saturday 06 September 2025 00:22:24 +0000 (0:00:01.286) 0:00:09.825 **** 2025-09-06 00:22:26.423373 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:22:26.423383 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:22:26.423394 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:22:26.423404 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:22:26.423415 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:22:26.423425 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:22:26.423436 | orchestrator | 2025-09-06 00:22:26.423447 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-06 00:22:26.423457 | orchestrator | Saturday 06 September 2025 00:22:24 +0000 (0:00:00.134) 0:00:09.959 **** 2025-09-06 00:22:26.423468 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:22:26.423478 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:22:26.423489 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:22:26.423499 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:22:26.423510 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:22:26.423520 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:22:26.423531 | orchestrator | 2025-09-06 00:22:26.423542 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] 
************ 2025-09-06 00:22:26.423552 | orchestrator | Saturday 06 September 2025 00:22:25 +0000 (0:00:00.548) 0:00:10.508 **** 2025-09-06 00:22:26.423563 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:22:26.423574 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:22:26.423584 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:22:26.423595 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:22:26.423605 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:22:26.423616 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:22:26.423626 | orchestrator | 2025-09-06 00:22:26.423637 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-06 00:22:26.423655 | orchestrator | Saturday 06 September 2025 00:22:25 +0000 (0:00:00.143) 0:00:10.652 **** 2025-09-06 00:22:26.423666 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-06 00:22:26.423681 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-06 00:22:26.423692 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:22:26.423702 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:22:26.423713 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-06 00:22:26.423739 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:22:26.423750 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-06 00:22:26.423761 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:22:26.423771 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-06 00:22:26.423782 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:22:26.423793 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-06 00:22:26.423803 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:22:26.423814 | orchestrator | 2025-09-06 00:22:26.423825 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-06 00:22:26.423836 | orchestrator | Saturday 06 September 2025 00:22:25 +0000 (0:00:00.705) 0:00:11.358 **** 2025-09-06 00:22:26.423846 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:22:26.423857 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:22:26.423867 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:22:26.423878 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:22:26.423889 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:22:26.423908 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:22:26.423926 | orchestrator | 2025-09-06 00:22:26.423945 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-06 00:22:26.423963 | orchestrator | Saturday 06 September 2025 00:22:26 +0000 (0:00:00.132) 0:00:11.490 **** 2025-09-06 00:22:26.423980 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:22:26.423998 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:22:26.424016 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:22:26.424033 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:22:26.424050 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:22:26.424069 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:22:26.424087 | orchestrator | 2025-09-06 00:22:26.424107 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-06 00:22:26.424139 | orchestrator | Saturday 06 September 2025 00:22:26 +0000 (0:00:00.158) 0:00:11.648 **** 2025-09-06 00:22:26.424160 | orchestrator | skipping: [testbed-node-0] 
2025-09-06 00:22:26.424172 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:22:26.424182 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:22:26.424193 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:22:26.424214 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:22:27.445786 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:22:27.445883 | orchestrator | 2025-09-06 00:22:27.445897 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-06 00:22:27.445910 | orchestrator | Saturday 06 September 2025 00:22:26 +0000 (0:00:00.139) 0:00:11.788 **** 2025-09-06 00:22:27.445921 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:22:27.445932 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:22:27.445942 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:22:27.445953 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:22:27.445963 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:22:27.445974 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:22:27.445985 | orchestrator | 2025-09-06 00:22:27.445996 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-06 00:22:27.446007 | orchestrator | Saturday 06 September 2025 00:22:27 +0000 (0:00:00.628) 0:00:12.416 **** 2025-09-06 00:22:27.446061 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:22:27.446074 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:22:27.446084 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:22:27.446118 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:22:27.446129 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:22:27.446140 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:22:27.446151 | orchestrator | 2025-09-06 00:22:27.446162 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:22:27.446174 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:22:27.446188 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:22:27.446198 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:22:27.446209 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:22:27.446220 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:22:27.446230 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:22:27.446241 | orchestrator | 2025-09-06 00:22:27.446251 | orchestrator | 2025-09-06 00:22:27.446262 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:22:27.446273 | orchestrator | Saturday 06 September 2025 00:22:27 +0000 (0:00:00.193) 0:00:12.610 **** 2025-09-06 00:22:27.446286 | orchestrator | =============================================================================== 2025-09-06 00:22:27.446299 | orchestrator | Gathering Facts --------------------------------------------------------- 3.56s 2025-09-06 00:22:27.446312 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s 2025-09-06 00:22:27.446326 | orchestrator | osism.commons.operator : Copy user sudoers file 
------------------------- 1.25s 2025-09-06 00:22:27.446337 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.12s 2025-09-06 00:22:27.446350 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.76s 2025-09-06 00:22:27.446362 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s 2025-09-06 00:22:27.446374 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2025-09-06 00:22:27.446385 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s 2025-09-06 00:22:27.446397 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s 2025-09-06 00:22:27.446410 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.55s 2025-09-06 00:22:27.446423 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.19s 2025-09-06 00:22:27.446436 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2025-09-06 00:22:27.446448 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2025-09-06 00:22:27.446460 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-09-06 00:22:27.446472 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.14s 2025-09-06 00:22:27.446484 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s 2025-09-06 00:22:27.446496 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.13s 2025-09-06 00:22:27.446508 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s 2025-09-06 00:22:27.695697 | orchestrator | + osism apply --environment custom facts 2025-09-06 00:22:29.531814 | orchestrator | 2025-09-06 00:22:29 | INFO  | Trying to run play facts in environment custom 2025-09-06 00:22:39.629360 | orchestrator | 2025-09-06 00:22:39 | INFO  | Task 1d559bce-0306-4357-9bee-ac962b958f9c (facts) was prepared for execution. 2025-09-06 00:22:39.629535 | orchestrator | 2025-09-06 00:22:39 | INFO  | It takes a moment until task 1d559bce-0306-4357-9bee-ac962b958f9c (facts) has been started and output is visible here. 
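The "osism apply --environment custom facts" call executed above distributes static custom facts to the hosts; the play output that follows ("Create custom facts directory", "Copy fact file", and later the testbed_ceph_devices* fact files) is exactly that. In plain Ansible terms, custom facts are files placed in /etc/ansible/facts.d/ on each host; INI or JSON files (or executables printing JSON) with a .fact suffix are read on the next fact-gathering run and exposed under ansible_local. The play below is a minimal sketch of that mechanism with an invented fact name and content, not the actual testbed playbook:

- name: Distribute a static custom fact (illustrative sketch)
  hosts: all
  become: true
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Copy fact file (hypothetical example_devices fact)
      ansible.builtin.copy:
        dest: /etc/ansible/facts.d/example_devices.fact
        mode: "0644"
        content: |
          {"osd_candidates": ["/dev/sdb", "/dev/sdc"]}

    - name: Re-gather facts so ansible_local.example_devices becomes usable
      ansible.builtin.setup:

Anything stored this way can then be referenced in later plays as ansible_local.example_devices.osd_candidates, presumably the same pattern the testbed_ceph_devices* files copied in the log below follow.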
2025-09-06 00:23:21.817600 | orchestrator | 2025-09-06 00:23:21.817760 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-06 00:23:21.817778 | orchestrator | 2025-09-06 00:23:21.817789 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-06 00:23:21.817800 | orchestrator | Saturday 06 September 2025 00:22:43 +0000 (0:00:00.063) 0:00:00.063 **** 2025-09-06 00:23:21.817810 | orchestrator | ok: [testbed-manager] 2025-09-06 00:23:21.817821 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:23:21.817831 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:23:21.817841 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:23:21.817850 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:23:21.817860 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:23:21.817869 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:23:21.817879 | orchestrator | 2025-09-06 00:23:21.817889 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-06 00:23:21.817899 | orchestrator | Saturday 06 September 2025 00:22:44 +0000 (0:00:01.298) 0:00:01.361 **** 2025-09-06 00:23:21.817909 | orchestrator | ok: [testbed-manager] 2025-09-06 00:23:21.817918 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:23:21.817928 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:23:21.817937 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:23:21.817947 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:23:21.817956 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:23:21.817966 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:23:21.817975 | orchestrator | 2025-09-06 00:23:21.817985 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-06 00:23:21.817994 | orchestrator | 2025-09-06 00:23:21.818004 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-06 00:23:21.818014 | orchestrator | Saturday 06 September 2025 00:22:45 +0000 (0:00:01.075) 0:00:02.437 **** 2025-09-06 00:23:21.818078 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:21.818087 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:21.818097 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:21.818106 | orchestrator | 2025-09-06 00:23:21.818116 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-06 00:23:21.818126 | orchestrator | Saturday 06 September 2025 00:22:45 +0000 (0:00:00.095) 0:00:02.532 **** 2025-09-06 00:23:21.818138 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:21.818149 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:21.818161 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:21.818171 | orchestrator | 2025-09-06 00:23:21.818182 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-06 00:23:21.818193 | orchestrator | Saturday 06 September 2025 00:22:45 +0000 (0:00:00.172) 0:00:02.705 **** 2025-09-06 00:23:21.818206 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:21.818222 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:21.818239 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:21.818257 | orchestrator | 2025-09-06 00:23:21.818269 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-06 00:23:21.818282 | orchestrator | Saturday 
06 September 2025 00:22:46 +0000 (0:00:00.161) 0:00:02.866 **** 2025-09-06 00:23:21.818295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:23:21.818307 | orchestrator | 2025-09-06 00:23:21.818319 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-06 00:23:21.818331 | orchestrator | Saturday 06 September 2025 00:22:46 +0000 (0:00:00.102) 0:00:02.968 **** 2025-09-06 00:23:21.818367 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:21.818379 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:21.818390 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:21.818400 | orchestrator | 2025-09-06 00:23:21.818412 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-06 00:23:21.818423 | orchestrator | Saturday 06 September 2025 00:22:46 +0000 (0:00:00.389) 0:00:03.358 **** 2025-09-06 00:23:21.818434 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:23:21.818445 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:23:21.818456 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:23:21.818468 | orchestrator | 2025-09-06 00:23:21.818479 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-06 00:23:21.818490 | orchestrator | Saturday 06 September 2025 00:22:46 +0000 (0:00:00.075) 0:00:03.433 **** 2025-09-06 00:23:21.818500 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:23:21.818509 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:23:21.818518 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:23:21.818528 | orchestrator | 2025-09-06 00:23:21.818537 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-06 00:23:21.818547 | orchestrator | Saturday 06 September 2025 00:22:47 +0000 (0:00:00.948) 0:00:04.382 **** 2025-09-06 00:23:21.818556 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:21.818566 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:21.818575 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:21.818585 | orchestrator | 2025-09-06 00:23:21.818594 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-06 00:23:21.818604 | orchestrator | Saturday 06 September 2025 00:22:47 +0000 (0:00:00.428) 0:00:04.810 **** 2025-09-06 00:23:21.818613 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:23:21.818623 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:23:21.818632 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:23:21.818642 | orchestrator | 2025-09-06 00:23:21.818651 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-06 00:23:21.818661 | orchestrator | Saturday 06 September 2025 00:22:48 +0000 (0:00:00.973) 0:00:05.784 **** 2025-09-06 00:23:21.818670 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:23:21.818699 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:23:21.818709 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:23:21.818719 | orchestrator | 2025-09-06 00:23:21.818728 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-09-06 00:23:21.818738 | orchestrator | Saturday 06 September 2025 00:23:05 +0000 (0:00:16.720) 0:00:22.505 **** 2025-09-06 00:23:21.818747 | orchestrator | 
skipping: [testbed-node-3] 2025-09-06 00:23:21.818757 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:23:21.818767 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:23:21.818776 | orchestrator | 2025-09-06 00:23:21.818802 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-06 00:23:21.818829 | orchestrator | Saturday 06 September 2025 00:23:05 +0000 (0:00:00.097) 0:00:22.603 **** 2025-09-06 00:23:21.818840 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:23:21.818849 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:23:21.818859 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:23:21.818868 | orchestrator | 2025-09-06 00:23:21.818877 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-06 00:23:21.818900 | orchestrator | Saturday 06 September 2025 00:23:12 +0000 (0:00:07.135) 0:00:29.738 **** 2025-09-06 00:23:21.818910 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:21.818920 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:21.818939 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:21.818949 | orchestrator | 2025-09-06 00:23:21.818959 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-06 00:23:21.818969 | orchestrator | Saturday 06 September 2025 00:23:13 +0000 (0:00:00.432) 0:00:30.171 **** 2025-09-06 00:23:21.818978 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-06 00:23:21.818988 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-06 00:23:21.819004 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-06 00:23:21.819014 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-06 00:23:21.819023 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-06 00:23:21.819032 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-06 00:23:21.819042 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-06 00:23:21.819051 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-06 00:23:21.819061 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-06 00:23:21.819070 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-06 00:23:21.819080 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-06 00:23:21.819089 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-06 00:23:21.819098 | orchestrator | 2025-09-06 00:23:21.819108 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-06 00:23:21.819118 | orchestrator | Saturday 06 September 2025 00:23:16 +0000 (0:00:03.362) 0:00:33.533 **** 2025-09-06 00:23:21.819127 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:21.819137 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:21.819146 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:21.819156 | orchestrator | 2025-09-06 00:23:21.819165 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-06 00:23:21.819175 | orchestrator | 2025-09-06 00:23:21.819184 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-06 00:23:21.819194 | orchestrator | 
Saturday 06 September 2025 00:23:17 +0000 (0:00:01.187) 0:00:34.720 **** 2025-09-06 00:23:21.819204 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:23:21.819213 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:23:21.819222 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:23:21.819232 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:21.819241 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:21.819251 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:21.819260 | orchestrator | ok: [testbed-manager] 2025-09-06 00:23:21.819270 | orchestrator | 2025-09-06 00:23:21.819279 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:23:21.819290 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:23:21.819301 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:23:21.819312 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:23:21.819322 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:23:21.819331 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:23:21.819341 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:23:21.819351 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:23:21.819360 | orchestrator | 2025-09-06 00:23:21.819370 | orchestrator | 2025-09-06 00:23:21.819379 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:23:21.819389 | orchestrator | Saturday 06 September 2025 00:23:21 +0000 (0:00:03.919) 0:00:38.640 **** 2025-09-06 00:23:21.819399 | orchestrator | =============================================================================== 2025-09-06 00:23:21.819414 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.72s 2025-09-06 00:23:21.819423 | orchestrator | Install required packages (Debian) -------------------------------------- 7.14s 2025-09-06 00:23:21.819433 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.92s 2025-09-06 00:23:21.819442 | orchestrator | Copy fact files --------------------------------------------------------- 3.36s 2025-09-06 00:23:21.819456 | orchestrator | Create custom facts directory ------------------------------------------- 1.30s 2025-09-06 00:23:21.819466 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.19s 2025-09-06 00:23:21.819482 | orchestrator | Copy fact file ---------------------------------------------------------- 1.08s 2025-09-06 00:23:22.031653 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.97s 2025-09-06 00:23:22.031776 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.95s 2025-09-06 00:23:22.031788 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2025-09-06 00:23:22.031798 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.43s 2025-09-06 00:23:22.031808 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory 
----- 0.39s 2025-09-06 00:23:22.031818 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s 2025-09-06 00:23:22.031827 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.16s 2025-09-06 00:23:22.031837 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.10s 2025-09-06 00:23:22.031847 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-09-06 00:23:22.031857 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s 2025-09-06 00:23:22.031866 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.08s 2025-09-06 00:23:22.294980 | orchestrator | + osism apply bootstrap 2025-09-06 00:23:34.354507 | orchestrator | 2025-09-06 00:23:34 | INFO  | Task e9584028-0a84-4862-a2c1-712c05a8343c (bootstrap) was prepared for execution. 2025-09-06 00:23:34.354621 | orchestrator | 2025-09-06 00:23:34 | INFO  | It takes a moment until task e9584028-0a84-4862-a2c1-712c05a8343c (bootstrap) has been started and output is visible here. 2025-09-06 00:23:49.151649 | orchestrator | 2025-09-06 00:23:49.151812 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-06 00:23:49.151825 | orchestrator | 2025-09-06 00:23:49.151835 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-06 00:23:49.151845 | orchestrator | Saturday 06 September 2025 00:23:38 +0000 (0:00:00.120) 0:00:00.120 **** 2025-09-06 00:23:49.151854 | orchestrator | ok: [testbed-manager] 2025-09-06 00:23:49.151863 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:23:49.151872 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:23:49.151881 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:23:49.151890 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:49.151898 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:49.151907 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:49.151916 | orchestrator | 2025-09-06 00:23:49.151924 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-06 00:23:49.151933 | orchestrator | 2025-09-06 00:23:49.151942 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-06 00:23:49.151950 | orchestrator | Saturday 06 September 2025 00:23:38 +0000 (0:00:00.163) 0:00:00.283 **** 2025-09-06 00:23:49.151959 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:23:49.151968 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:23:49.151976 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:23:49.151985 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:49.151993 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:49.152002 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:49.152011 | orchestrator | ok: [testbed-manager] 2025-09-06 00:23:49.152041 | orchestrator | 2025-09-06 00:23:49.152050 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-09-06 00:23:49.152059 | orchestrator | 2025-09-06 00:23:49.152068 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-06 00:23:49.152077 | orchestrator | Saturday 06 September 2025 00:23:41 +0000 (0:00:03.640) 0:00:03.923 **** 2025-09-06 00:23:49.152086 | orchestrator | skipping: [testbed-manager] => 
(item=testbed-manager)  2025-09-06 00:23:49.152094 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-06 00:23:49.152103 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-06 00:23:49.152111 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-06 00:23:49.152120 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-06 00:23:49.152129 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-06 00:23:49.152137 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-06 00:23:49.152146 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-06 00:23:49.152154 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-06 00:23:49.152163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-06 00:23:49.152171 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-06 00:23:49.152180 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-06 00:23:49.152189 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-06 00:23:49.152200 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-06 00:23:49.152210 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-06 00:23:49.152220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-06 00:23:49.152230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-06 00:23:49.152240 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:23:49.152251 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-06 00:23:49.152261 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:23:49.152270 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-06 00:23:49.152281 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-09-06 00:23:49.152292 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-06 00:23:49.152301 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-06 00:23:49.152311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-06 00:23:49.152321 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-06 00:23:49.152330 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-06 00:23:49.152340 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-06 00:23:49.152353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-06 00:23:49.152369 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-09-06 00:23:49.152385 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-06 00:23:49.152400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-06 00:23:49.152410 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-06 00:23:49.152420 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-06 00:23:49.152430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:23:49.152440 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-09-06 00:23:49.152449 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:23:49.152460 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  
2025-09-06 00:23:49.152470 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:23:49.152481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:23:49.152491 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-06 00:23:49.152507 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-06 00:23:49.152517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:23:49.152527 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:23:49.152553 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-06 00:23:49.152563 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-06 00:23:49.152571 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-06 00:23:49.152595 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-06 00:23:49.152604 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-06 00:23:49.152613 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-06 00:23:49.152621 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-06 00:23:49.152630 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:23:49.152638 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-06 00:23:49.152647 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-06 00:23:49.152678 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-06 00:23:49.152687 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:23:49.152696 | orchestrator | 2025-09-06 00:23:49.152705 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-06 00:23:49.152713 | orchestrator | 2025-09-06 00:23:49.152722 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-06 00:23:49.152731 | orchestrator | Saturday 06 September 2025 00:23:42 +0000 (0:00:00.444) 0:00:04.368 **** 2025-09-06 00:23:49.152739 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:49.152748 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:49.152756 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:49.152765 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:23:49.152773 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:23:49.152782 | orchestrator | ok: [testbed-manager] 2025-09-06 00:23:49.152790 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:23:49.152799 | orchestrator | 2025-09-06 00:23:49.152808 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-06 00:23:49.152816 | orchestrator | Saturday 06 September 2025 00:23:43 +0000 (0:00:01.141) 0:00:05.510 **** 2025-09-06 00:23:49.152825 | orchestrator | ok: [testbed-manager] 2025-09-06 00:23:49.152834 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:23:49.152842 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:23:49.152851 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:23:49.152859 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:23:49.152868 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:23:49.152876 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:23:49.152884 | orchestrator | 2025-09-06 00:23:49.152893 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-06 00:23:49.152902 | orchestrator | Saturday 06 September 2025 00:23:44 +0000 
(0:00:01.183) 0:00:06.693 **** 2025-09-06 00:23:49.152912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:23:49.152923 | orchestrator | 2025-09-06 00:23:49.152932 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-06 00:23:49.152940 | orchestrator | Saturday 06 September 2025 00:23:44 +0000 (0:00:00.253) 0:00:06.947 **** 2025-09-06 00:23:49.152949 | orchestrator | changed: [testbed-manager] 2025-09-06 00:23:49.152958 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:23:49.152966 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:23:49.152974 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:23:49.152983 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:23:49.152992 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:23:49.153000 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:23:49.153009 | orchestrator | 2025-09-06 00:23:49.153023 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-06 00:23:49.153032 | orchestrator | Saturday 06 September 2025 00:23:46 +0000 (0:00:01.922) 0:00:08.869 **** 2025-09-06 00:23:49.153041 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:23:49.153051 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:23:49.153061 | orchestrator | 2025-09-06 00:23:49.153074 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-06 00:23:49.153083 | orchestrator | Saturday 06 September 2025 00:23:47 +0000 (0:00:00.292) 0:00:09.162 **** 2025-09-06 00:23:49.153092 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:23:49.153100 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:23:49.153109 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:23:49.153117 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:23:49.153125 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:23:49.153134 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:23:49.153143 | orchestrator | 2025-09-06 00:23:49.153151 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-09-06 00:23:49.153160 | orchestrator | Saturday 06 September 2025 00:23:48 +0000 (0:00:00.961) 0:00:10.123 **** 2025-09-06 00:23:49.153168 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:23:49.153177 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:23:49.153185 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:23:49.153194 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:23:49.153202 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:23:49.153211 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:23:49.153219 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:23:49.153228 | orchestrator | 2025-09-06 00:23:49.153236 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-06 00:23:49.153245 | orchestrator | Saturday 06 September 2025 00:23:48 +0000 (0:00:00.541) 0:00:10.665 **** 2025-09-06 00:23:49.153253 | orchestrator | skipping: [testbed-node-0] 2025-09-06 
00:23:49.153262 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:23:49.153270 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:23:49.153279 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:23:49.153287 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:23:49.153296 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:23:49.153305 | orchestrator | ok: [testbed-manager] 2025-09-06 00:23:49.153313 | orchestrator | 2025-09-06 00:23:49.153322 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-06 00:23:49.153331 | orchestrator | Saturday 06 September 2025 00:23:49 +0000 (0:00:00.404) 0:00:11.069 **** 2025-09-06 00:23:49.153340 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:23:49.153348 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:23:49.153362 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:24:00.692184 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:24:00.692300 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:24:00.692314 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:24:00.692324 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:24:00.692334 | orchestrator | 2025-09-06 00:24:00.692344 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-06 00:24:00.692356 | orchestrator | Saturday 06 September 2025 00:23:49 +0000 (0:00:00.181) 0:00:11.250 **** 2025-09-06 00:24:00.692368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:24:00.692395 | orchestrator | 2025-09-06 00:24:00.692406 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-06 00:24:00.692417 | orchestrator | Saturday 06 September 2025 00:23:49 +0000 (0:00:00.253) 0:00:11.504 **** 2025-09-06 00:24:00.692449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:24:00.692459 | orchestrator | 2025-09-06 00:24:00.692469 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-06 00:24:00.692479 | orchestrator | Saturday 06 September 2025 00:23:49 +0000 (0:00:00.290) 0:00:11.794 **** 2025-09-06 00:24:00.692488 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.692499 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:00.692508 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:00.692517 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:00.692527 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:00.692536 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:00.692545 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:00.692555 | orchestrator | 2025-09-06 00:24:00.692565 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-06 00:24:00.692574 | orchestrator | Saturday 06 September 2025 00:23:51 +0000 (0:00:01.518) 0:00:13.313 **** 2025-09-06 00:24:00.692584 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:24:00.692593 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:24:00.692603 | 
orchestrator | skipping: [testbed-node-1] 2025-09-06 00:24:00.692612 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:24:00.692621 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:24:00.692631 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:24:00.692640 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:24:00.692703 | orchestrator | 2025-09-06 00:24:00.692714 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-06 00:24:00.692724 | orchestrator | Saturday 06 September 2025 00:23:51 +0000 (0:00:00.199) 0:00:13.513 **** 2025-09-06 00:24:00.692736 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.692747 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:00.692759 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:00.692769 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:00.692780 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:00.692791 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:00.692801 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:00.692812 | orchestrator | 2025-09-06 00:24:00.692823 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-06 00:24:00.692834 | orchestrator | Saturday 06 September 2025 00:23:51 +0000 (0:00:00.506) 0:00:14.019 **** 2025-09-06 00:24:00.692845 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:24:00.692856 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:24:00.692868 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:24:00.692879 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:24:00.692890 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:24:00.692902 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:24:00.692912 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:24:00.692922 | orchestrator | 2025-09-06 00:24:00.692934 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-06 00:24:00.692947 | orchestrator | Saturday 06 September 2025 00:23:52 +0000 (0:00:00.208) 0:00:14.228 **** 2025-09-06 00:24:00.692958 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.692968 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:24:00.692979 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:24:00.692991 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:24:00.693002 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:24:00.693013 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:24:00.693024 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:24:00.693034 | orchestrator | 2025-09-06 00:24:00.693046 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-06 00:24:00.693057 | orchestrator | Saturday 06 September 2025 00:23:52 +0000 (0:00:00.510) 0:00:14.738 **** 2025-09-06 00:24:00.693069 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.693085 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:24:00.693095 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:24:00.693105 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:24:00.693114 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:24:00.693124 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:24:00.693133 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:24:00.693142 | orchestrator | 2025-09-06 00:24:00.693152 | orchestrator | TASK [osism.commons.resolvconf : Start/enable 
systemd-resolved service] ******** 2025-09-06 00:24:00.693162 | orchestrator | Saturday 06 September 2025 00:23:53 +0000 (0:00:01.133) 0:00:15.871 **** 2025-09-06 00:24:00.693171 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.693181 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:00.693190 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:00.693200 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:00.693210 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:00.693219 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:00.693229 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:00.693238 | orchestrator | 2025-09-06 00:24:00.693248 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-06 00:24:00.693258 | orchestrator | Saturday 06 September 2025 00:23:54 +0000 (0:00:01.119) 0:00:16.990 **** 2025-09-06 00:24:00.693285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:24:00.693296 | orchestrator | 2025-09-06 00:24:00.693305 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-06 00:24:00.693315 | orchestrator | Saturday 06 September 2025 00:23:55 +0000 (0:00:00.342) 0:00:17.333 **** 2025-09-06 00:24:00.693324 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:24:00.693334 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:24:00.693343 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:24:00.693352 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:24:00.693362 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:24:00.693371 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:24:00.693381 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:24:00.693390 | orchestrator | 2025-09-06 00:24:00.693400 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-06 00:24:00.693409 | orchestrator | Saturday 06 September 2025 00:23:56 +0000 (0:00:01.203) 0:00:18.537 **** 2025-09-06 00:24:00.693419 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.693428 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:00.693438 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:00.693447 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:00.693456 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:00.693466 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:00.693475 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:00.693485 | orchestrator | 2025-09-06 00:24:00.693494 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-06 00:24:00.693504 | orchestrator | Saturday 06 September 2025 00:23:56 +0000 (0:00:00.187) 0:00:18.724 **** 2025-09-06 00:24:00.693513 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.693523 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:00.693532 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:00.693541 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:00.693551 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:00.693560 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:00.693569 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:00.693579 | orchestrator | 2025-09-06 00:24:00.693588 | orchestrator | TASK 
[osism.commons.repository : Set repositories to default] ****************** 2025-09-06 00:24:00.693598 | orchestrator | Saturday 06 September 2025 00:23:56 +0000 (0:00:00.223) 0:00:18.947 **** 2025-09-06 00:24:00.693608 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.693617 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:00.693632 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:00.693642 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:00.693670 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:00.693679 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:00.693689 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:00.693699 | orchestrator | 2025-09-06 00:24:00.693708 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-06 00:24:00.693718 | orchestrator | Saturday 06 September 2025 00:23:57 +0000 (0:00:00.195) 0:00:19.143 **** 2025-09-06 00:24:00.693767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:24:00.693780 | orchestrator | 2025-09-06 00:24:00.693790 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-06 00:24:00.693800 | orchestrator | Saturday 06 September 2025 00:23:57 +0000 (0:00:00.262) 0:00:19.405 **** 2025-09-06 00:24:00.693809 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.693819 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:00.693828 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:00.693838 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:00.693847 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:00.693857 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:00.693866 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:00.693876 | orchestrator | 2025-09-06 00:24:00.693885 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-06 00:24:00.693899 | orchestrator | Saturday 06 September 2025 00:23:57 +0000 (0:00:00.508) 0:00:19.914 **** 2025-09-06 00:24:00.693909 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:24:00.693919 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:24:00.693929 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:24:00.693938 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:24:00.693947 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:24:00.693957 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:24:00.693966 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:24:00.693976 | orchestrator | 2025-09-06 00:24:00.693986 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-06 00:24:00.693995 | orchestrator | Saturday 06 September 2025 00:23:58 +0000 (0:00:00.203) 0:00:20.117 **** 2025-09-06 00:24:00.694005 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.694057 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:24:00.694069 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:24:00.694079 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:00.694088 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:24:00.694098 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:00.694107 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:00.694117 | orchestrator | 2025-09-06 
00:24:00.694127 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-06 00:24:00.694136 | orchestrator | Saturday 06 September 2025 00:23:59 +0000 (0:00:00.955) 0:00:21.073 **** 2025-09-06 00:24:00.694146 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.694155 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:00.694165 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:00.694175 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:00.694184 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:00.694194 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:00.694203 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:00.694213 | orchestrator | 2025-09-06 00:24:00.694222 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-06 00:24:00.694232 | orchestrator | Saturday 06 September 2025 00:23:59 +0000 (0:00:00.624) 0:00:21.697 **** 2025-09-06 00:24:00.694242 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:00.694251 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:00.694261 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:24:00.694270 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:24:00.694295 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.882897 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.883042 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:24:40.883060 | orchestrator | 2025-09-06 00:24:40.883073 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-06 00:24:40.883086 | orchestrator | Saturday 06 September 2025 00:24:00 +0000 (0:00:01.019) 0:00:22.717 **** 2025-09-06 00:24:40.883097 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:40.883108 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.883119 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.883130 | orchestrator | changed: [testbed-manager] 2025-09-06 00:24:40.883140 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:24:40.883151 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:24:40.883162 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:24:40.883173 | orchestrator | 2025-09-06 00:24:40.883184 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-06 00:24:40.883195 | orchestrator | Saturday 06 September 2025 00:24:18 +0000 (0:00:17.522) 0:00:40.239 **** 2025-09-06 00:24:40.883205 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:40.883216 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:40.883227 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:40.883237 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:40.883248 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:40.883259 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.883270 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.883280 | orchestrator | 2025-09-06 00:24:40.883291 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-06 00:24:40.883302 | orchestrator | Saturday 06 September 2025 00:24:18 +0000 (0:00:00.227) 0:00:40.467 **** 2025-09-06 00:24:40.883313 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:40.883324 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:40.883334 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:40.883345 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:40.883355 | orchestrator | ok: 
[testbed-node-3] 2025-09-06 00:24:40.883366 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.883377 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.883388 | orchestrator | 2025-09-06 00:24:40.883398 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-06 00:24:40.883410 | orchestrator | Saturday 06 September 2025 00:24:18 +0000 (0:00:00.230) 0:00:40.698 **** 2025-09-06 00:24:40.883422 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:40.883435 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:40.883448 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:40.883461 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:40.883472 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:40.883484 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.883496 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.883509 | orchestrator | 2025-09-06 00:24:40.883521 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-06 00:24:40.883534 | orchestrator | Saturday 06 September 2025 00:24:18 +0000 (0:00:00.218) 0:00:40.916 **** 2025-09-06 00:24:40.883549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:24:40.883565 | orchestrator | 2025-09-06 00:24:40.883578 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-06 00:24:40.883590 | orchestrator | Saturday 06 September 2025 00:24:19 +0000 (0:00:00.278) 0:00:41.194 **** 2025-09-06 00:24:40.883603 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:40.883616 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:40.883647 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:40.883659 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:40.883671 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:40.883683 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.883695 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.883733 | orchestrator | 2025-09-06 00:24:40.883747 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-06 00:24:40.883760 | orchestrator | Saturday 06 September 2025 00:24:20 +0000 (0:00:01.592) 0:00:42.787 **** 2025-09-06 00:24:40.883786 | orchestrator | changed: [testbed-manager] 2025-09-06 00:24:40.883798 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:24:40.883809 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:24:40.883819 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:24:40.883830 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:24:40.883840 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:24:40.883851 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:24:40.883861 | orchestrator | 2025-09-06 00:24:40.883872 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-06 00:24:40.883883 | orchestrator | Saturday 06 September 2025 00:24:21 +0000 (0:00:01.100) 0:00:43.887 **** 2025-09-06 00:24:40.883894 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:40.883905 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:40.883916 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:40.883926 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:40.883937 | 
orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:40.883947 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.883958 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.883968 | orchestrator | 2025-09-06 00:24:40.883979 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-06 00:24:40.883990 | orchestrator | Saturday 06 September 2025 00:24:22 +0000 (0:00:00.811) 0:00:44.699 **** 2025-09-06 00:24:40.884002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:24:40.884014 | orchestrator | 2025-09-06 00:24:40.884025 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-06 00:24:40.884036 | orchestrator | Saturday 06 September 2025 00:24:22 +0000 (0:00:00.298) 0:00:44.997 **** 2025-09-06 00:24:40.884047 | orchestrator | changed: [testbed-manager] 2025-09-06 00:24:40.884057 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:24:40.884068 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:24:40.884078 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:24:40.884089 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:24:40.884100 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:24:40.884110 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:24:40.884121 | orchestrator | 2025-09-06 00:24:40.884148 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-09-06 00:24:40.884159 | orchestrator | Saturday 06 September 2025 00:24:23 +0000 (0:00:01.020) 0:00:46.018 **** 2025-09-06 00:24:40.884170 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:24:40.884181 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:24:40.884191 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:24:40.884202 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:24:40.884213 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:24:40.884224 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:24:40.884234 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:24:40.884245 | orchestrator | 2025-09-06 00:24:40.884255 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-06 00:24:40.884266 | orchestrator | Saturday 06 September 2025 00:24:24 +0000 (0:00:00.301) 0:00:46.319 **** 2025-09-06 00:24:40.884277 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:24:40.884287 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:24:40.884298 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:24:40.884309 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:24:40.884319 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:24:40.884330 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:24:40.884341 | orchestrator | changed: [testbed-manager] 2025-09-06 00:24:40.884360 | orchestrator | 2025-09-06 00:24:40.884371 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-06 00:24:40.884382 | orchestrator | Saturday 06 September 2025 00:24:35 +0000 (0:00:11.208) 0:00:57.528 **** 2025-09-06 00:24:40.884393 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.884403 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:40.884414 | orchestrator | ok: [testbed-node-5] 2025-09-06 
00:24:40.884425 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:40.884435 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:40.884446 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:40.884456 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:40.884467 | orchestrator | 2025-09-06 00:24:40.884478 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-06 00:24:40.884489 | orchestrator | Saturday 06 September 2025 00:24:36 +0000 (0:00:01.163) 0:00:58.691 **** 2025-09-06 00:24:40.884499 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:40.884510 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:40.884521 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:40.884531 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:40.884542 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:40.884552 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.884563 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.884573 | orchestrator | 2025-09-06 00:24:40.884584 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-06 00:24:40.884595 | orchestrator | Saturday 06 September 2025 00:24:37 +0000 (0:00:01.018) 0:00:59.710 **** 2025-09-06 00:24:40.884606 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:40.884616 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:40.884643 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:40.884654 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:40.884665 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:40.884676 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.884686 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.884697 | orchestrator | 2025-09-06 00:24:40.884707 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-06 00:24:40.884718 | orchestrator | Saturday 06 September 2025 00:24:37 +0000 (0:00:00.227) 0:00:59.938 **** 2025-09-06 00:24:40.884729 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:40.884739 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:40.884750 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:40.884761 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:40.884771 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:40.884782 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.884792 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.884803 | orchestrator | 2025-09-06 00:24:40.884814 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-06 00:24:40.884824 | orchestrator | Saturday 06 September 2025 00:24:38 +0000 (0:00:00.222) 0:01:00.161 **** 2025-09-06 00:24:40.884836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:24:40.884847 | orchestrator | 2025-09-06 00:24:40.884859 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-06 00:24:40.884869 | orchestrator | Saturday 06 September 2025 00:24:38 +0000 (0:00:00.261) 0:01:00.422 **** 2025-09-06 00:24:40.884880 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:40.884891 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:40.884901 | orchestrator | ok: [testbed-node-3] 2025-09-06 
00:24:40.884912 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.884923 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:40.884934 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.884944 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:40.884955 | orchestrator | 2025-09-06 00:24:40.884965 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-06 00:24:40.884976 | orchestrator | Saturday 06 September 2025 00:24:39 +0000 (0:00:01.610) 0:01:02.032 **** 2025-09-06 00:24:40.884994 | orchestrator | changed: [testbed-manager] 2025-09-06 00:24:40.885005 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:24:40.885015 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:24:40.885026 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:24:40.885037 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:24:40.885047 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:24:40.885058 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:24:40.885069 | orchestrator | 2025-09-06 00:24:40.885079 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-06 00:24:40.885090 | orchestrator | Saturday 06 September 2025 00:24:40 +0000 (0:00:00.643) 0:01:02.676 **** 2025-09-06 00:24:40.885101 | orchestrator | ok: [testbed-manager] 2025-09-06 00:24:40.885111 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:24:40.885122 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:24:40.885133 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:24:40.885143 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:24:40.885154 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:24:40.885164 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:24:40.885175 | orchestrator | 2025-09-06 00:24:40.885192 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-06 00:27:03.142011 | orchestrator | Saturday 06 September 2025 00:24:40 +0000 (0:00:00.232) 0:01:02.909 **** 2025-09-06 00:27:03.142190 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:03.142207 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:03.142219 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:03.142230 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:03.142241 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:03.142253 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:03.142264 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:03.142275 | orchestrator | 2025-09-06 00:27:03.142287 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-06 00:27:03.142298 | orchestrator | Saturday 06 September 2025 00:24:42 +0000 (0:00:01.169) 0:01:04.078 **** 2025-09-06 00:27:03.142309 | orchestrator | changed: [testbed-manager] 2025-09-06 00:27:03.142321 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:27:03.142332 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:27:03.142343 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:27:03.142353 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:27:03.142364 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:27:03.142375 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:27:03.142386 | orchestrator | 2025-09-06 00:27:03.142397 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-06 00:27:03.142408 | orchestrator | Saturday 06 September 2025 00:24:43 +0000 
(0:00:01.916) 0:01:05.995 **** 2025-09-06 00:27:03.142419 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:03.142430 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:03.142440 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:03.142451 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:03.142462 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:03.142473 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:03.142483 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:03.142494 | orchestrator | 2025-09-06 00:27:03.142505 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-06 00:27:03.142516 | orchestrator | Saturday 06 September 2025 00:24:46 +0000 (0:00:02.357) 0:01:08.353 **** 2025-09-06 00:27:03.142546 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:03.142560 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:03.142573 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:03.142585 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:03.142598 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:03.142611 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:03.142623 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:03.142635 | orchestrator | 2025-09-06 00:27:03.142648 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-06 00:27:03.142686 | orchestrator | Saturday 06 September 2025 00:25:23 +0000 (0:00:37.020) 0:01:45.374 **** 2025-09-06 00:27:03.142716 | orchestrator | changed: [testbed-manager] 2025-09-06 00:27:03.142729 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:27:03.142741 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:27:03.142753 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:27:03.142766 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:27:03.142779 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:27:03.142793 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:27:03.142805 | orchestrator | 2025-09-06 00:27:03.142819 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-06 00:27:03.142832 | orchestrator | Saturday 06 September 2025 00:26:42 +0000 (0:01:19.156) 0:03:04.530 **** 2025-09-06 00:27:03.142845 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:03.142857 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:03.142871 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:03.142884 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:03.142897 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:03.142907 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:03.142918 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:03.142929 | orchestrator | 2025-09-06 00:27:03.142939 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-06 00:27:03.142951 | orchestrator | Saturday 06 September 2025 00:26:44 +0000 (0:00:01.743) 0:03:06.274 **** 2025-09-06 00:27:03.142962 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:03.142972 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:03.142983 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:03.142994 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:03.143009 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:03.143020 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:03.143031 | orchestrator | changed: [testbed-manager] 2025-09-06 00:27:03.143041 | orchestrator | 2025-09-06 
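
The osism.commons.packages tasks that finish here are the expensive part of this play: refreshing the APT cache, downloading and applying upgrades, and installing the required package set account for the 37-second download and 79-second install steps in the timings above. A rough equivalent with the builtin apt module looks like the sketch below; the package list is a placeholder, not the role's actual required_packages value.

---
# Sketch only: package names are placeholders for the OSISM required list.
- name: Upgrade and install base packages
  hosts: all
  become: true
  vars:
    required_packages:
      - needrestart
      - rsyslog
  tasks:
    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600

    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist

    - name: Install required packages
      ansible.builtin.apt:
        name: "{{ required_packages }}"
        state: present

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true
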
00:27:03.143052 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-06 00:27:03.143063 | orchestrator | Saturday 06 September 2025 00:26:54 +0000 (0:00:10.454) 0:03:16.728 **** 2025-09-06 00:27:03.143083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-06 00:27:03.143099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-09-06 00:27:03.143135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-06 00:27:03.143149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-06 00:27:03.143169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-06 00:27:03.143180 | orchestrator | 2025-09-06 00:27:03.143192 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-06 00:27:03.143203 | orchestrator | Saturday 06 September 2025 00:26:55 +0000 (0:00:00.325) 0:03:17.053 **** 2025-09-06 00:27:03.143214 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-06 00:27:03.143224 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:27:03.143235 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-06 00:27:03.143246 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-06 00:27:03.143257 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:27:03.143268 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:27:03.143279 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-06 00:27:03.143289 | orchestrator | skipping: 
[testbed-node-5] 2025-09-06 00:27:03.143300 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-06 00:27:03.143311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-06 00:27:03.143322 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-06 00:27:03.143332 | orchestrator | 2025-09-06 00:27:03.143343 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-06 00:27:03.143354 | orchestrator | Saturday 06 September 2025 00:26:55 +0000 (0:00:00.593) 0:03:17.647 **** 2025-09-06 00:27:03.143365 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-06 00:27:03.143377 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-06 00:27:03.143388 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-06 00:27:03.143399 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-06 00:27:03.143409 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-06 00:27:03.143420 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-06 00:27:03.143437 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-06 00:27:03.143448 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-06 00:27:03.143459 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-06 00:27:03.143469 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-06 00:27:03.143480 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:27:03.143491 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-06 00:27:03.143502 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-06 00:27:03.143513 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-06 00:27:03.143524 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-06 00:27:03.143566 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-06 00:27:03.143577 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-06 00:27:03.143595 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-06 00:27:03.143606 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-06 00:27:03.143616 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-06 00:27:03.143627 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-06 00:27:03.143645 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  
2025-09-06 00:27:05.279905 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-06 00:27:05.280015 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-06 00:27:05.280031 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-06 00:27:05.280042 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-06 00:27:05.280053 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-06 00:27:05.280065 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-06 00:27:05.280076 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-06 00:27:05.280088 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:27:05.280100 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-06 00:27:05.280110 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-06 00:27:05.280121 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:27:05.280132 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-06 00:27:05.280143 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-06 00:27:05.280153 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-06 00:27:05.280164 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-06 00:27:05.280174 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-06 00:27:05.280185 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-06 00:27:05.280196 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-06 00:27:05.280206 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-06 00:27:05.280217 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-06 00:27:05.280228 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-06 00:27:05.280239 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:27:05.280250 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-06 00:27:05.280260 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-06 00:27:05.280271 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-06 00:27:05.280281 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-06 00:27:05.280292 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-06 00:27:05.280320 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-06 00:27:05.280332 | orchestrator | 
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-06 00:27:05.280364 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-06 00:27:05.280376 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-06 00:27:05.280387 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-06 00:27:05.280397 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-06 00:27:05.280408 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-06 00:27:05.280418 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-06 00:27:05.280429 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-06 00:27:05.280443 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-06 00:27:05.280456 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-06 00:27:05.280470 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-06 00:27:05.280482 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-06 00:27:05.280495 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-06 00:27:05.280508 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-06 00:27:05.280521 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-06 00:27:05.280577 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-06 00:27:05.280592 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-06 00:27:05.280605 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-06 00:27:05.280617 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-06 00:27:05.280630 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-06 00:27:05.280642 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-06 00:27:05.280655 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-06 00:27:05.280668 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-06 00:27:05.280682 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-06 00:27:05.280695 | orchestrator | 2025-09-06 00:27:05.280709 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-06 00:27:05.280722 | orchestrator | Saturday 06 September 2025 00:27:03 +0000 (0:00:07.515) 0:03:25.163 **** 2025-09-06 00:27:05.280735 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-06 00:27:05.280747 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'vm.swappiness', 'value': 1}) 2025-09-06 00:27:05.280760 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-06 00:27:05.280772 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-06 00:27:05.280785 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-06 00:27:05.280799 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-06 00:27:05.280812 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-06 00:27:05.280823 | orchestrator | 2025-09-06 00:27:05.280834 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-06 00:27:05.280852 | orchestrator | Saturday 06 September 2025 00:27:03 +0000 (0:00:00.618) 0:03:25.781 **** 2025-09-06 00:27:05.280863 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-06 00:27:05.280874 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:27:05.280884 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-06 00:27:05.280895 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-06 00:27:05.280906 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:27:05.280917 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:27:05.280928 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-06 00:27:05.280939 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:27:05.280950 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-06 00:27:05.280960 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-06 00:27:05.280972 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-06 00:27:05.280982 | orchestrator | 2025-09-06 00:27:05.281002 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-06 00:27:05.281014 | orchestrator | Saturday 06 September 2025 00:27:04 +0000 (0:00:00.617) 0:03:26.399 **** 2025-09-06 00:27:05.281025 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-06 00:27:05.281036 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:27:05.281047 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-06 00:27:05.281057 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-06 00:27:05.281068 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:27:05.281079 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-06 00:27:05.281089 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:27:05.281100 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:27:05.281110 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-06 00:27:05.281121 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-06 00:27:05.281132 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-06 00:27:05.281143 | orchestrator | 2025-09-06 00:27:05.281153 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-06 00:27:05.281164 | orchestrator | Saturday 06 September 2025 00:27:04 +0000 (0:00:00.633) 0:03:27.033 **** 2025-09-06 00:27:05.281175 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:27:05.281186 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:27:05.281196 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:27:05.281207 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:27:05.281218 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:27:05.281235 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:27:17.221067 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:27:17.221189 | orchestrator | 2025-09-06 00:27:17.221206 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-06 00:27:17.221217 | orchestrator | Saturday 06 September 2025 00:27:05 +0000 (0:00:00.276) 0:03:27.310 **** 2025-09-06 00:27:17.221227 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:17.221238 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:17.221248 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:17.221258 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:17.221291 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:17.221301 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:17.221310 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:17.221320 | orchestrator | 2025-09-06 00:27:17.221330 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-06 00:27:17.221339 | orchestrator | Saturday 06 September 2025 00:27:11 +0000 (0:00:05.887) 0:03:33.197 **** 2025-09-06 00:27:17.221349 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-06 00:27:17.221359 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-06 00:27:17.221369 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:27:17.221378 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-06 00:27:17.221388 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:27:17.221397 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-06 00:27:17.221407 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:27:17.221417 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-06 00:27:17.221426 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:27:17.221436 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-06 00:27:17.221445 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:27:17.221459 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:27:17.221469 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-06 00:27:17.221478 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:27:17.221488 | orchestrator | 2025-09-06 00:27:17.221498 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-09-06 00:27:17.221507 | orchestrator | Saturday 06 September 2025 00:27:11 +0000 (0:00:00.313) 0:03:33.511 **** 2025-09-06 00:27:17.221563 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-06 00:27:17.221575 | orchestrator | ok: [testbed-manager] => (item=cron) 
2025-09-06 00:27:17.221584 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-06 00:27:17.221594 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-06 00:27:17.221604 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-06 00:27:17.221616 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-06 00:27:17.221628 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-06 00:27:17.221640 | orchestrator | 2025-09-06 00:27:17.221651 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-06 00:27:17.221663 | orchestrator | Saturday 06 September 2025 00:27:12 +0000 (0:00:01.048) 0:03:34.559 **** 2025-09-06 00:27:17.221676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:27:17.221690 | orchestrator | 2025-09-06 00:27:17.221703 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-06 00:27:17.221714 | orchestrator | Saturday 06 September 2025 00:27:13 +0000 (0:00:00.608) 0:03:35.168 **** 2025-09-06 00:27:17.221725 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:17.221736 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:17.221748 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:17.221759 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:17.221770 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:17.221780 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:17.221792 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:17.221802 | orchestrator | 2025-09-06 00:27:17.221829 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-06 00:27:17.221841 | orchestrator | Saturday 06 September 2025 00:27:14 +0000 (0:00:01.302) 0:03:36.471 **** 2025-09-06 00:27:17.221853 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:17.221863 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:17.221874 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:17.221886 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:17.221897 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:17.221908 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:17.221926 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:17.221937 | orchestrator | 2025-09-06 00:27:17.221949 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-06 00:27:17.221961 | orchestrator | Saturday 06 September 2025 00:27:15 +0000 (0:00:00.592) 0:03:37.064 **** 2025-09-06 00:27:17.221971 | orchestrator | changed: [testbed-manager] 2025-09-06 00:27:17.221981 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:27:17.221990 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:27:17.222000 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:27:17.222010 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:27:17.222078 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:27:17.222089 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:27:17.222098 | orchestrator | 2025-09-06 00:27:17.222108 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-06 00:27:17.222118 | orchestrator | Saturday 06 September 2025 00:27:15 +0000 (0:00:00.652) 0:03:37.716 **** 2025-09-06 00:27:17.222128 | orchestrator 
| ok: [testbed-manager] 2025-09-06 00:27:17.222137 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:17.222147 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:17.222156 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:17.222166 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:17.222176 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:17.222185 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:17.222195 | orchestrator | 2025-09-06 00:27:17.222204 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-06 00:27:17.222214 | orchestrator | Saturday 06 September 2025 00:27:16 +0000 (0:00:00.607) 0:03:38.324 **** 2025-09-06 00:27:17.222245 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757117044.3473601, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:17.222260 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757117080.2202156, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:17.222271 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757117082.9786165, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:17.222281 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757117070.200132, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:17.222297 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757117070.2050822, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:17.222317 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757117084.3397892, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:17.222327 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757117077.19287, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:17.222354 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:34.245187 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:34.245313 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:34.245330 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:34.245342 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:34.245378 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:34.245390 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 00:27:34.245402 | orchestrator | 2025-09-06 00:27:34.245416 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-06 00:27:34.245428 | orchestrator | Saturday 06 September 2025 00:27:17 +0000 (0:00:00.918) 0:03:39.242 **** 2025-09-06 00:27:34.245440 | orchestrator | changed: [testbed-manager] 2025-09-06 00:27:34.245451 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:27:34.245462 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:27:34.245472 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:27:34.245483 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:27:34.245493 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:27:34.245560 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:27:34.245573 | orchestrator | 2025-09-06 00:27:34.245584 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-06 00:27:34.245595 | orchestrator | Saturday 06 September 2025 00:27:18 +0000 (0:00:01.084) 0:03:40.327 **** 2025-09-06 00:27:34.245605 | orchestrator | changed: [testbed-manager] 2025-09-06 00:27:34.245616 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:27:34.245626 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:27:34.245637 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:27:34.245664 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:27:34.245675 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:27:34.245686 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:27:34.245696 | orchestrator | 2025-09-06 00:27:34.245707 | orchestrator | TASK [osism.commons.motd : 
Copy issue.net file] ******************************** 2025-09-06 00:27:34.245720 | orchestrator | Saturday 06 September 2025 00:27:19 +0000 (0:00:01.161) 0:03:41.489 **** 2025-09-06 00:27:34.245732 | orchestrator | changed: [testbed-manager] 2025-09-06 00:27:34.245745 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:27:34.245757 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:27:34.245769 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:27:34.245781 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:27:34.245795 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:27:34.245808 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:27:34.245820 | orchestrator | 2025-09-06 00:27:34.245833 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-06 00:27:34.245845 | orchestrator | Saturday 06 September 2025 00:27:20 +0000 (0:00:01.195) 0:03:42.684 **** 2025-09-06 00:27:34.245868 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:27:34.245880 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:27:34.245892 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:27:34.245904 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:27:34.245916 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:27:34.245927 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:27:34.245939 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:27:34.245952 | orchestrator | 2025-09-06 00:27:34.245964 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-06 00:27:34.245977 | orchestrator | Saturday 06 September 2025 00:27:21 +0000 (0:00:00.373) 0:03:43.058 **** 2025-09-06 00:27:34.245989 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:34.246078 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:34.246095 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:34.246108 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:34.246119 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:34.246129 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:34.246140 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:34.246151 | orchestrator | 2025-09-06 00:27:34.246162 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-06 00:27:34.246173 | orchestrator | Saturday 06 September 2025 00:27:21 +0000 (0:00:00.792) 0:03:43.851 **** 2025-09-06 00:27:34.246186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:27:34.246199 | orchestrator | 2025-09-06 00:27:34.246209 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-06 00:27:34.246220 | orchestrator | Saturday 06 September 2025 00:27:22 +0000 (0:00:00.416) 0:03:44.267 **** 2025-09-06 00:27:34.246231 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:34.246242 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:27:34.246253 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:27:34.246263 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:27:34.246274 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:27:34.246284 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:27:34.246295 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:27:34.246305 | 
orchestrator | 2025-09-06 00:27:34.246316 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-06 00:27:34.246327 | orchestrator | Saturday 06 September 2025 00:27:30 +0000 (0:00:07.972) 0:03:52.239 **** 2025-09-06 00:27:34.246337 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:34.246353 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:34.246365 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:34.246375 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:34.246386 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:34.246396 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:34.246407 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:34.246418 | orchestrator | 2025-09-06 00:27:34.246429 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-06 00:27:34.246440 | orchestrator | Saturday 06 September 2025 00:27:31 +0000 (0:00:01.256) 0:03:53.496 **** 2025-09-06 00:27:34.246450 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:34.246461 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:34.246471 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:34.246482 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:34.246492 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:34.246524 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:34.246536 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:34.246546 | orchestrator | 2025-09-06 00:27:34.246557 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-06 00:27:34.246568 | orchestrator | Saturday 06 September 2025 00:27:33 +0000 (0:00:01.784) 0:03:55.281 **** 2025-09-06 00:27:34.246579 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:34.246598 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:34.246608 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:34.246619 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:34.246630 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:34.246640 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:34.246651 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:34.246662 | orchestrator | 2025-09-06 00:27:34.246673 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-06 00:27:34.246685 | orchestrator | Saturday 06 September 2025 00:27:33 +0000 (0:00:00.306) 0:03:55.587 **** 2025-09-06 00:27:34.246695 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:34.246706 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:34.246716 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:34.246727 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:34.246737 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:34.246748 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:27:34.246759 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:27:34.246769 | orchestrator | 2025-09-06 00:27:34.246780 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-06 00:27:34.246791 | orchestrator | Saturday 06 September 2025 00:27:33 +0000 (0:00:00.407) 0:03:55.995 **** 2025-09-06 00:27:34.246801 | orchestrator | ok: [testbed-manager] 2025-09-06 00:27:34.246812 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:27:34.246822 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:27:34.246833 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:27:34.246844 | 
orchestrator | ok: [testbed-node-3] 2025-09-06 00:27:34.246863 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:28:45.278364 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:28:45.278530 | orchestrator | 2025-09-06 00:28:45.278548 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-06 00:28:45.278561 | orchestrator | Saturday 06 September 2025 00:27:34 +0000 (0:00:00.280) 0:03:56.276 **** 2025-09-06 00:28:45.278571 | orchestrator | ok: [testbed-manager] 2025-09-06 00:28:45.278581 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:28:45.278590 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:28:45.278600 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:28:45.278609 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:28:45.278619 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:28:45.278628 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:28:45.278638 | orchestrator | 2025-09-06 00:28:45.278647 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-06 00:28:45.278657 | orchestrator | Saturday 06 September 2025 00:27:40 +0000 (0:00:06.127) 0:04:02.403 **** 2025-09-06 00:28:45.278669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:28:45.278681 | orchestrator | 2025-09-06 00:28:45.278691 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-06 00:28:45.278701 | orchestrator | Saturday 06 September 2025 00:27:40 +0000 (0:00:00.363) 0:04:02.767 **** 2025-09-06 00:28:45.278711 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-06 00:28:45.278720 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-06 00:28:45.278730 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-06 00:28:45.278739 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-06 00:28:45.278749 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:28:45.278759 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-06 00:28:45.278768 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-06 00:28:45.278778 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:28:45.278787 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-06 00:28:45.278796 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-06 00:28:45.278806 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:28:45.278838 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-06 00:28:45.278849 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-06 00:28:45.278858 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:28:45.278867 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-06 00:28:45.278877 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-06 00:28:45.278886 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:28:45.278895 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:28:45.278907 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-06 00:28:45.278919 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-06 
00:28:45.278931 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:28:45.278941 | orchestrator | 2025-09-06 00:28:45.278953 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-06 00:28:45.278963 | orchestrator | Saturday 06 September 2025 00:27:41 +0000 (0:00:00.356) 0:04:03.123 **** 2025-09-06 00:28:45.278990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:28:45.279001 | orchestrator | 2025-09-06 00:28:45.279012 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-06 00:28:45.279024 | orchestrator | Saturday 06 September 2025 00:27:41 +0000 (0:00:00.436) 0:04:03.560 **** 2025-09-06 00:28:45.279035 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-06 00:28:45.279047 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-06 00:28:45.279058 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:28:45.279070 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-06 00:28:45.279081 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:28:45.279092 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-06 00:28:45.279103 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:28:45.279114 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:28:45.279126 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-06 00:28:45.279137 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-06 00:28:45.279147 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:28:45.279158 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:28:45.279169 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-06 00:28:45.279181 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:28:45.279191 | orchestrator | 2025-09-06 00:28:45.279200 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-06 00:28:45.279209 | orchestrator | Saturday 06 September 2025 00:27:41 +0000 (0:00:00.324) 0:04:03.884 **** 2025-09-06 00:28:45.279219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:28:45.279228 | orchestrator | 2025-09-06 00:28:45.279238 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-06 00:28:45.279247 | orchestrator | Saturday 06 September 2025 00:27:42 +0000 (0:00:00.435) 0:04:04.320 **** 2025-09-06 00:28:45.279257 | orchestrator | changed: [testbed-manager] 2025-09-06 00:28:45.279281 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:28:45.279292 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:28:45.279301 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:28:45.279311 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:28:45.279320 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:28:45.279329 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:28:45.279338 | orchestrator | 2025-09-06 00:28:45.279348 | 
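
The cleanup tasks around this point purge packages the testbed does not want on freshly provisioned nodes (a role-defined package list, then cloud-init and unattended-upgrades in the entries that follow) and finish with the apt cache and dependency cleanup. A rough equivalent is sketched below; the example package list is an assumption, the real one comes from the role defaults:

- name: Clean up unwanted packages (illustrative sketch)
  hosts: all
  become: true
  tasks:
    - name: Cleanup installed packages       # example list; the role supplies its own
      ansible.builtin.apt:
        name: "{{ cleanup_packages | default(['lxd-agent-loader', 'snapd']) }}"
        state: absent
        purge: true

    - name: Remove cloud-init package
      ansible.builtin.apt:
        name: cloud-init
        state: absent
        purge: true

    - name: Uninstall unattended-upgrades package
      ansible.builtin.apt:
        name: unattended-upgrades
        state: absent
        purge: true

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true

The "Disable apt-daily timers" and "Cleanup services" entries above were skipped in this run because their conditionals evaluated to false on these hosts.
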
orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-06 00:28:45.279365 | orchestrator | Saturday 06 September 2025 00:28:17 +0000 (0:00:34.990) 0:04:39.311 **** 2025-09-06 00:28:45.279375 | orchestrator | changed: [testbed-manager] 2025-09-06 00:28:45.279384 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:28:45.279393 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:28:45.279403 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:28:45.279412 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:28:45.279421 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:28:45.279431 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:28:45.279440 | orchestrator | 2025-09-06 00:28:45.279449 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-06 00:28:45.279478 | orchestrator | Saturday 06 September 2025 00:28:25 +0000 (0:00:08.185) 0:04:47.497 **** 2025-09-06 00:28:45.279488 | orchestrator | changed: [testbed-manager] 2025-09-06 00:28:45.279497 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:28:45.279506 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:28:45.279516 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:28:45.279525 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:28:45.279534 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:28:45.279544 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:28:45.279553 | orchestrator | 2025-09-06 00:28:45.279563 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-06 00:28:45.279572 | orchestrator | Saturday 06 September 2025 00:28:33 +0000 (0:00:07.861) 0:04:55.358 **** 2025-09-06 00:28:45.279582 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:28:45.279591 | orchestrator | ok: [testbed-manager] 2025-09-06 00:28:45.279600 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:28:45.279610 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:28:45.279619 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:28:45.279629 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:28:45.279638 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:28:45.279647 | orchestrator | 2025-09-06 00:28:45.279657 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-06 00:28:45.279667 | orchestrator | Saturday 06 September 2025 00:28:35 +0000 (0:00:01.686) 0:04:57.044 **** 2025-09-06 00:28:45.279676 | orchestrator | changed: [testbed-manager] 2025-09-06 00:28:45.279686 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:28:45.279695 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:28:45.279704 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:28:45.279714 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:28:45.279723 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:28:45.279732 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:28:45.279742 | orchestrator | 2025-09-06 00:28:45.279751 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-06 00:28:45.279761 | orchestrator | Saturday 06 September 2025 00:28:41 +0000 (0:00:06.222) 0:05:03.267 **** 2025-09-06 00:28:45.279771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2025-09-06 00:28:45.279783 | orchestrator | 2025-09-06 00:28:45.279792 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-06 00:28:45.279807 | orchestrator | Saturday 06 September 2025 00:28:41 +0000 (0:00:00.513) 0:05:03.780 **** 2025-09-06 00:28:45.279816 | orchestrator | changed: [testbed-manager] 2025-09-06 00:28:45.279826 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:28:45.279835 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:28:45.279844 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:28:45.279854 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:28:45.279863 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:28:45.279873 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:28:45.279882 | orchestrator | 2025-09-06 00:28:45.279891 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-06 00:28:45.279907 | orchestrator | Saturday 06 September 2025 00:28:42 +0000 (0:00:00.733) 0:05:04.514 **** 2025-09-06 00:28:45.279917 | orchestrator | ok: [testbed-manager] 2025-09-06 00:28:45.279926 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:28:45.279936 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:28:45.279945 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:28:45.279955 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:28:45.279964 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:28:45.279973 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:28:45.279983 | orchestrator | 2025-09-06 00:28:45.279992 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-06 00:28:45.280002 | orchestrator | Saturday 06 September 2025 00:28:44 +0000 (0:00:01.742) 0:05:06.257 **** 2025-09-06 00:28:45.280011 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:28:45.280021 | orchestrator | changed: [testbed-manager] 2025-09-06 00:28:45.280030 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:28:45.280039 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:28:45.280049 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:28:45.280058 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:28:45.280067 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:28:45.280076 | orchestrator | 2025-09-06 00:28:45.280086 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-06 00:28:45.280095 | orchestrator | Saturday 06 September 2025 00:28:44 +0000 (0:00:00.781) 0:05:07.038 **** 2025-09-06 00:28:45.280105 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:28:45.280114 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:28:45.280123 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:28:45.280133 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:28:45.280142 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:28:45.280151 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:28:45.280160 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:28:45.280170 | orchestrator | 2025-09-06 00:28:45.280179 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-06 00:28:45.280195 | orchestrator | Saturday 06 September 2025 00:28:45 +0000 (0:00:00.266) 0:05:07.305 **** 2025-09-06 00:29:11.867813 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:29:11.867932 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:29:11.867948 | 
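
The timezone tasks shown here install tzdata and switch every node to UTC; the /etc/adjtime tasks are skipped because their conditionals evaluate to false on these hosts. A minimal sketch, assuming the zone comes from a role variable such as timezone_zone:

- name: Set the system timezone (illustrative sketch)
  hosts: all
  become: true
  vars:
    timezone_zone: UTC                      # assumed variable name; UTC matches what this run applies
  tasks:
    - name: Install tzdata package
      ansible.builtin.apt:
        name: tzdata
        state: present

    - name: Set timezone to UTC
      community.general.timezone:
        name: "{{ timezone_zone }}"
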
orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:11.867960 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:11.867971 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:29:11.867982 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:11.867993 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:11.868004 | orchestrator | 2025-09-06 00:29:11.868016 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-06 00:29:11.868027 | orchestrator | Saturday 06 September 2025 00:28:45 +0000 (0:00:00.367) 0:05:07.672 **** 2025-09-06 00:29:11.868038 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:11.868049 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:29:11.868060 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:29:11.868071 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:29:11.868081 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:29:11.868092 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:29:11.868102 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:29:11.868113 | orchestrator | 2025-09-06 00:29:11.868124 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-06 00:29:11.868135 | orchestrator | Saturday 06 September 2025 00:28:45 +0000 (0:00:00.325) 0:05:07.998 **** 2025-09-06 00:29:11.868146 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:29:11.868157 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:29:11.868168 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:11.868178 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:11.868189 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:29:11.868199 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:11.868210 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:11.868245 | orchestrator | 2025-09-06 00:29:11.868256 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-06 00:29:11.868267 | orchestrator | Saturday 06 September 2025 00:28:46 +0000 (0:00:00.266) 0:05:08.265 **** 2025-09-06 00:29:11.868278 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:11.868288 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:29:11.868299 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:29:11.868309 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:29:11.868320 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:29:11.868330 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:29:11.868341 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:29:11.868353 | orchestrator | 2025-09-06 00:29:11.868391 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-09-06 00:29:11.868416 | orchestrator | Saturday 06 September 2025 00:28:46 +0000 (0:00:00.296) 0:05:08.561 **** 2025-09-06 00:29:11.868429 | orchestrator | ok: [testbed-manager] =>  2025-09-06 00:29:11.868461 | orchestrator |  docker_version: 5:27.5.1 2025-09-06 00:29:11.868473 | orchestrator | ok: [testbed-node-0] =>  2025-09-06 00:29:11.868485 | orchestrator |  docker_version: 5:27.5.1 2025-09-06 00:29:11.868498 | orchestrator | ok: [testbed-node-1] =>  2025-09-06 00:29:11.868510 | orchestrator |  docker_version: 5:27.5.1 2025-09-06 00:29:11.868522 | orchestrator | ok: [testbed-node-2] =>  2025-09-06 00:29:11.868535 | orchestrator |  docker_version: 5:27.5.1 2025-09-06 00:29:11.868547 | orchestrator | ok: [testbed-node-3] =>  2025-09-06 00:29:11.868559 | orchestrator |  
docker_version: 5:27.5.1 2025-09-06 00:29:11.868570 | orchestrator | ok: [testbed-node-4] =>  2025-09-06 00:29:11.868583 | orchestrator |  docker_version: 5:27.5.1 2025-09-06 00:29:11.868595 | orchestrator | ok: [testbed-node-5] =>  2025-09-06 00:29:11.868607 | orchestrator |  docker_version: 5:27.5.1 2025-09-06 00:29:11.868619 | orchestrator | 2025-09-06 00:29:11.868632 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-06 00:29:11.868644 | orchestrator | Saturday 06 September 2025 00:28:46 +0000 (0:00:00.282) 0:05:08.844 **** 2025-09-06 00:29:11.868657 | orchestrator | ok: [testbed-manager] =>  2025-09-06 00:29:11.868669 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-06 00:29:11.868682 | orchestrator | ok: [testbed-node-0] =>  2025-09-06 00:29:11.868695 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-06 00:29:11.868706 | orchestrator | ok: [testbed-node-1] =>  2025-09-06 00:29:11.868716 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-06 00:29:11.868727 | orchestrator | ok: [testbed-node-2] =>  2025-09-06 00:29:11.868737 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-06 00:29:11.868748 | orchestrator | ok: [testbed-node-3] =>  2025-09-06 00:29:11.868758 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-06 00:29:11.868769 | orchestrator | ok: [testbed-node-4] =>  2025-09-06 00:29:11.868779 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-06 00:29:11.868789 | orchestrator | ok: [testbed-node-5] =>  2025-09-06 00:29:11.868800 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-06 00:29:11.868810 | orchestrator | 2025-09-06 00:29:11.868821 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-06 00:29:11.868831 | orchestrator | Saturday 06 September 2025 00:28:47 +0000 (0:00:00.277) 0:05:09.122 **** 2025-09-06 00:29:11.868842 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:29:11.868852 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:29:11.868863 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:11.868873 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:11.868884 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:29:11.868894 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:11.868904 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:11.868915 | orchestrator | 2025-09-06 00:29:11.868925 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-06 00:29:11.868936 | orchestrator | Saturday 06 September 2025 00:28:47 +0000 (0:00:00.250) 0:05:09.372 **** 2025-09-06 00:29:11.868946 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:29:11.868966 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:29:11.868976 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:11.868987 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:11.868997 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:29:11.869008 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:11.869018 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:11.869029 | orchestrator | 2025-09-06 00:29:11.869039 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-06 00:29:11.869050 | orchestrator | Saturday 06 September 2025 00:28:47 +0000 (0:00:00.266) 0:05:09.638 **** 2025-09-06 00:29:11.869078 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:29:11.869093 | orchestrator | 2025-09-06 00:29:11.869104 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-06 00:29:11.869115 | orchestrator | Saturday 06 September 2025 00:28:48 +0000 (0:00:00.418) 0:05:10.057 **** 2025-09-06 00:29:11.869125 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:11.869136 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:29:11.869147 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:29:11.869157 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:29:11.869168 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:29:11.869178 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:29:11.869189 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:29:11.869200 | orchestrator | 2025-09-06 00:29:11.869210 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-06 00:29:11.869221 | orchestrator | Saturday 06 September 2025 00:28:48 +0000 (0:00:00.860) 0:05:10.918 **** 2025-09-06 00:29:11.869232 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:11.869242 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:29:11.869253 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:29:11.869263 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:29:11.869274 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:29:11.869284 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:29:11.869294 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:29:11.869305 | orchestrator | 2025-09-06 00:29:11.869316 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-06 00:29:11.869327 | orchestrator | Saturday 06 September 2025 00:28:52 +0000 (0:00:03.232) 0:05:14.150 **** 2025-09-06 00:29:11.869338 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-06 00:29:11.869349 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-06 00:29:11.869360 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-06 00:29:11.869370 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-06 00:29:11.869381 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-06 00:29:11.869392 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-06 00:29:11.869402 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:29:11.869413 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-06 00:29:11.869423 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-06 00:29:11.869434 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-06 00:29:11.869472 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:29:11.869500 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-06 00:29:11.869512 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-06 00:29:11.869522 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-06 00:29:11.869533 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:11.869544 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-06 00:29:11.869555 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  
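
The install tasks that follow add the upstream Docker apt repository: an apt-transport-https prerequisite, the repository GPG key, the repository entry itself and a package-cache update. A condensed sketch of that flow is below; the key URL and repository line are the standard download.docker.com values and are assumptions here, not copied from the role:

- name: Configure the Docker apt repository (illustrative sketch)
  hosts: all
  become: true
  tasks:
    - name: Install apt-transport-https package
      ansible.builtin.apt:
        name: apt-transport-https
        state: present

    - name: Add repository gpg key           # URL and key path assumed
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/trusted.gpg.d/docker.asc
        mode: "0644"

    - name: Add repository                   # repository line assumed
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/trusted.gpg.d/docker.asc] https://download.docker.com/linux/ubuntu {{ ansible_facts['distribution_release'] }} stable"
        filename: docker
        state: present

    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true
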
2025-09-06 00:29:11.869565 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-06 00:29:11.869584 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:11.869595 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-06 00:29:11.869606 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-06 00:29:11.869616 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-06 00:29:11.869627 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:29:11.869638 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:11.869653 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-06 00:29:11.869664 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-06 00:29:11.869675 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-06 00:29:11.869686 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:11.869696 | orchestrator | 2025-09-06 00:29:11.869707 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-06 00:29:11.869718 | orchestrator | Saturday 06 September 2025 00:28:52 +0000 (0:00:00.570) 0:05:14.720 **** 2025-09-06 00:29:11.869729 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:11.869739 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:29:11.869750 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:29:11.869760 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:29:11.869771 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:29:11.869782 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:29:11.869792 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:29:11.869803 | orchestrator | 2025-09-06 00:29:11.869813 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-06 00:29:11.869824 | orchestrator | Saturday 06 September 2025 00:28:59 +0000 (0:00:06.382) 0:05:21.103 **** 2025-09-06 00:29:11.869834 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:29:11.869845 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:29:11.869856 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:11.869866 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:29:11.869876 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:29:11.869887 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:29:11.869898 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:29:11.869908 | orchestrator | 2025-09-06 00:29:11.869919 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-06 00:29:11.869929 | orchestrator | Saturday 06 September 2025 00:29:00 +0000 (0:00:01.236) 0:05:22.339 **** 2025-09-06 00:29:11.869940 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:11.869951 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:29:11.869961 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:29:11.869972 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:29:11.869982 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:29:11.869993 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:29:11.870003 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:29:11.870014 | orchestrator | 2025-09-06 00:29:11.870084 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-06 00:29:11.870096 | orchestrator | Saturday 06 September 2025 00:29:08 +0000 (0:00:08.269) 0:05:30.608 **** 2025-09-06 
00:29:11.870106 | orchestrator | changed: [testbed-manager] 2025-09-06 00:29:11.870117 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:29:11.870127 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:29:11.870147 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:29:56.228347 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:29:56.228475 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:29:56.228492 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:29:56.228505 | orchestrator | 2025-09-06 00:29:56.228518 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-06 00:29:56.228530 | orchestrator | Saturday 06 September 2025 00:29:11 +0000 (0:00:03.284) 0:05:33.893 **** 2025-09-06 00:29:56.228542 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:56.228554 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:29:56.228565 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:29:56.228597 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:29:56.228608 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:29:56.228619 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:29:56.228629 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:29:56.228640 | orchestrator | 2025-09-06 00:29:56.228651 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-06 00:29:56.228661 | orchestrator | Saturday 06 September 2025 00:29:13 +0000 (0:00:01.307) 0:05:35.201 **** 2025-09-06 00:29:56.228672 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:56.228683 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:29:56.228693 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:29:56.228704 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:29:56.228714 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:29:56.228725 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:29:56.228735 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:29:56.228746 | orchestrator | 2025-09-06 00:29:56.228757 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-06 00:29:56.228767 | orchestrator | Saturday 06 September 2025 00:29:14 +0000 (0:00:01.292) 0:05:36.494 **** 2025-09-06 00:29:56.228778 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:29:56.228788 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:56.228799 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:56.228809 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:29:56.228820 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:56.228830 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:56.228841 | orchestrator | changed: [testbed-manager] 2025-09-06 00:29:56.228851 | orchestrator | 2025-09-06 00:29:56.228862 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-06 00:29:56.228873 | orchestrator | Saturday 06 September 2025 00:29:15 +0000 (0:00:00.764) 0:05:37.258 **** 2025-09-06 00:29:56.228883 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:56.228896 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:29:56.228909 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:29:56.228921 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:29:56.228933 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:29:56.228946 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:29:56.228958 | orchestrator | 
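
Pinning happens next in the log: the docker and docker-cli packages are pinned to the version printed earlier (5:27.5.1), and containerd is taken out of hold, installed, then put back on hold so routine upgrades cannot move it. The mechanism is sketched below with apt preferences for the pin and dpkg selections for the hold; the preferences path and exact package names are assumptions:

- name: Pin and hold the container runtime packages (illustrative sketch)
  hosts: all
  become: true
  vars:
    docker_version: "5:27.5.1"               # value taken from the 'Print used docker version' output above
  tasks:
    - name: Pin docker package version       # path and package name assumed
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce
        mode: "0644"
        content: |
          Package: docker-ce
          Pin: version {{ docker_version }}*
          Pin-Priority: 1000

    - name: Unlock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: install

    - name: Install containerd package
      ansible.builtin.apt:
        name: containerd.io
        state: present

    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: hold
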
changed: [testbed-node-2] 2025-09-06 00:29:56.228970 | orchestrator | 2025-09-06 00:29:56.228983 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-06 00:29:56.228996 | orchestrator | Saturday 06 September 2025 00:29:25 +0000 (0:00:10.344) 0:05:47.603 **** 2025-09-06 00:29:56.229008 | orchestrator | changed: [testbed-manager] 2025-09-06 00:29:56.229021 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:29:56.229032 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:29:56.229044 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:29:56.229057 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:29:56.229069 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:29:56.229081 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:29:56.229093 | orchestrator | 2025-09-06 00:29:56.229106 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-06 00:29:56.229128 | orchestrator | Saturday 06 September 2025 00:29:26 +0000 (0:00:00.961) 0:05:48.564 **** 2025-09-06 00:29:56.229140 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:56.229153 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:29:56.229166 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:29:56.229178 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:29:56.229190 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:29:56.229201 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:29:56.229214 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:29:56.229226 | orchestrator | 2025-09-06 00:29:56.229239 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-06 00:29:56.229251 | orchestrator | Saturday 06 September 2025 00:29:35 +0000 (0:00:08.867) 0:05:57.432 **** 2025-09-06 00:29:56.229269 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:56.229280 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:29:56.229290 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:29:56.229301 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:29:56.229312 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:29:56.229322 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:29:56.229333 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:29:56.229343 | orchestrator | 2025-09-06 00:29:56.229354 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-06 00:29:56.229364 | orchestrator | Saturday 06 September 2025 00:29:46 +0000 (0:00:11.025) 0:06:08.458 **** 2025-09-06 00:29:56.229375 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-06 00:29:56.229386 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-06 00:29:56.229397 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-06 00:29:56.229408 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-06 00:29:56.229418 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-06 00:29:56.229464 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-06 00:29:56.229475 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-06 00:29:56.229485 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-06 00:29:56.229496 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-06 00:29:56.229506 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-06 
00:29:56.229517 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-06 00:29:56.229528 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-06 00:29:56.229538 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-06 00:29:56.229549 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-06 00:29:56.229560 | orchestrator | 2025-09-06 00:29:56.229571 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-06 00:29:56.229606 | orchestrator | Saturday 06 September 2025 00:29:47 +0000 (0:00:01.228) 0:06:09.686 **** 2025-09-06 00:29:56.229619 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:29:56.229630 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:29:56.229641 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:56.229651 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:56.229662 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:29:56.229673 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:56.229683 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:56.229694 | orchestrator | 2025-09-06 00:29:56.229705 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-06 00:29:56.229716 | orchestrator | Saturday 06 September 2025 00:29:48 +0000 (0:00:00.527) 0:06:10.214 **** 2025-09-06 00:29:56.229727 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:56.229737 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:29:56.229748 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:29:56.229759 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:29:56.229769 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:29:56.229780 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:29:56.229790 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:29:56.229801 | orchestrator | 2025-09-06 00:29:56.229812 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-06 00:29:56.229824 | orchestrator | Saturday 06 September 2025 00:29:51 +0000 (0:00:03.657) 0:06:13.872 **** 2025-09-06 00:29:56.229834 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:29:56.229845 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:29:56.229856 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:56.229866 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:56.229877 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:29:56.229888 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:56.229898 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:56.229918 | orchestrator | 2025-09-06 00:29:56.229929 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-06 00:29:56.229941 | orchestrator | Saturday 06 September 2025 00:29:52 +0000 (0:00:00.458) 0:06:14.331 **** 2025-09-06 00:29:56.229951 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-06 00:29:56.229963 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-06 00:29:56.229973 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:29:56.229984 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-06 00:29:56.229995 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-06 00:29:56.230006 | orchestrator | skipping: 
[testbed-node-0] 2025-09-06 00:29:56.230060 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-06 00:29:56.230072 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-06 00:29:56.230083 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:56.230094 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-06 00:29:56.230104 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-06 00:29:56.230115 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-06 00:29:56.230126 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-06 00:29:56.230136 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:56.230147 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-06 00:29:56.230157 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-06 00:29:56.230174 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:29:56.230185 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:56.230195 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-06 00:29:56.230206 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-06 00:29:56.230217 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:56.230227 | orchestrator | 2025-09-06 00:29:56.230238 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-06 00:29:56.230249 | orchestrator | Saturday 06 September 2025 00:29:52 +0000 (0:00:00.646) 0:06:14.977 **** 2025-09-06 00:29:56.230259 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:29:56.230270 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:29:56.230280 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:56.230291 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:56.230301 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:29:56.230312 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:56.230322 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:56.230333 | orchestrator | 2025-09-06 00:29:56.230344 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-06 00:29:56.230354 | orchestrator | Saturday 06 September 2025 00:29:53 +0000 (0:00:00.477) 0:06:15.455 **** 2025-09-06 00:29:56.230365 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:29:56.230375 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:29:56.230386 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:56.230397 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:56.230407 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:29:56.230418 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:56.230445 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:56.230456 | orchestrator | 2025-09-06 00:29:56.230467 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-06 00:29:56.230478 | orchestrator | Saturday 06 September 2025 00:29:53 +0000 (0:00:00.462) 0:06:15.917 **** 2025-09-06 00:29:56.230489 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:29:56.230499 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:29:56.230510 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:29:56.230521 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:29:56.230531 | orchestrator | 
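
The config tasks that follow (plugins directory, systemd overlay, limits file, daemon.json) write the Docker daemon configuration and restart the service through a handler, which is why several "Flush handlers" entries and a "Restart docker service" handler appear further down. A reduced sketch of that notify/handler wiring, with an assumed daemon.json content:

- name: Configure the Docker daemon (illustrative sketch)
  hosts: all
  become: true
  tasks:
    - name: Copy daemon.json configuration file   # content assumed; the role renders its own template
      ansible.builtin.copy:
        dest: /etc/docker/daemon.json
        mode: "0644"
        content: |
          {
            "log-driver": "json-file",
            "log-opts": { "max-size": "10m", "max-file": "3" }
          }
      notify: Restart docker service

    - name: Flush handlers                         # run pending handlers now instead of at play end
      ansible.builtin.meta: flush_handlers

  handlers:
    - name: Restart docker service
      ansible.builtin.service:
        name: docker
        state: restarted

The manager node skips the restart handler in this run because its daemon.json change does not require one at that point, while the freshly provisioned nodes restart docker as shown below.
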
skipping: [testbed-node-3] 2025-09-06 00:29:56.230550 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:29:56.230561 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:29:56.230572 | orchestrator | 2025-09-06 00:29:56.230583 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-06 00:29:56.230593 | orchestrator | Saturday 06 September 2025 00:29:54 +0000 (0:00:00.517) 0:06:16.435 **** 2025-09-06 00:29:56.230604 | orchestrator | ok: [testbed-manager] 2025-09-06 00:29:56.230623 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:17.411015 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:30:17.411129 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:30:17.411145 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:30:17.411157 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:17.411168 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:30:17.411179 | orchestrator | 2025-09-06 00:30:17.411191 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-06 00:30:17.411203 | orchestrator | Saturday 06 September 2025 00:29:56 +0000 (0:00:01.821) 0:06:18.257 **** 2025-09-06 00:30:17.411215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:30:17.411228 | orchestrator | 2025-09-06 00:30:17.411239 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-06 00:30:17.411250 | orchestrator | Saturday 06 September 2025 00:29:57 +0000 (0:00:00.950) 0:06:19.207 **** 2025-09-06 00:30:17.411261 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:17.411272 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:17.411283 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:17.411294 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:30:17.411304 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:17.411315 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:17.411325 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:17.411345 | orchestrator | 2025-09-06 00:30:17.411363 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-06 00:30:17.411380 | orchestrator | Saturday 06 September 2025 00:29:57 +0000 (0:00:00.827) 0:06:20.035 **** 2025-09-06 00:30:17.411400 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:17.411493 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:17.411507 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:17.411517 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:30:17.411529 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:17.411542 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:17.411555 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:17.411567 | orchestrator | 2025-09-06 00:30:17.411580 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-06 00:30:17.411593 | orchestrator | Saturday 06 September 2025 00:29:58 +0000 (0:00:00.860) 0:06:20.895 **** 2025-09-06 00:30:17.411606 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:17.411618 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:17.411630 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:17.411643 | orchestrator | changed: 
[testbed-node-2] 2025-09-06 00:30:17.411655 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:17.411668 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:17.411681 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:17.411693 | orchestrator | 2025-09-06 00:30:17.411706 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-06 00:30:17.411720 | orchestrator | Saturday 06 September 2025 00:30:00 +0000 (0:00:01.264) 0:06:22.159 **** 2025-09-06 00:30:17.411732 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:30:17.411745 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:17.411757 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:30:17.411770 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:17.411783 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:30:17.411795 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:30:17.411834 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:30:17.411847 | orchestrator | 2025-09-06 00:30:17.411860 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-06 00:30:17.411888 | orchestrator | Saturday 06 September 2025 00:30:01 +0000 (0:00:01.560) 0:06:23.719 **** 2025-09-06 00:30:17.411899 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:17.411910 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:17.411921 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:17.411931 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:30:17.411942 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:17.411952 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:17.411962 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:17.411973 | orchestrator | 2025-09-06 00:30:17.411983 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-06 00:30:17.411994 | orchestrator | Saturday 06 September 2025 00:30:03 +0000 (0:00:01.374) 0:06:25.094 **** 2025-09-06 00:30:17.412005 | orchestrator | changed: [testbed-manager] 2025-09-06 00:30:17.412015 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:17.412025 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:17.412036 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:30:17.412046 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:17.412057 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:17.412067 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:17.412078 | orchestrator | 2025-09-06 00:30:17.412088 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-06 00:30:17.412099 | orchestrator | Saturday 06 September 2025 00:30:04 +0000 (0:00:01.426) 0:06:26.521 **** 2025-09-06 00:30:17.412110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:30:17.412121 | orchestrator | 2025-09-06 00:30:17.412132 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-06 00:30:17.412142 | orchestrator | Saturday 06 September 2025 00:30:05 +0000 (0:00:00.847) 0:06:27.368 **** 2025-09-06 00:30:17.412153 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:17.412164 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:17.412175 | orchestrator | ok: 
[testbed-node-1] 2025-09-06 00:30:17.412185 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:17.412196 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:30:17.412206 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:30:17.412217 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:30:17.412227 | orchestrator | 2025-09-06 00:30:17.412238 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-06 00:30:17.412249 | orchestrator | Saturday 06 September 2025 00:30:06 +0000 (0:00:01.265) 0:06:28.634 **** 2025-09-06 00:30:17.412259 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:17.412270 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:17.412298 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:30:17.412309 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:17.412320 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:30:17.412330 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:30:17.412341 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:30:17.412351 | orchestrator | 2025-09-06 00:30:17.412362 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-06 00:30:17.412372 | orchestrator | Saturday 06 September 2025 00:30:07 +0000 (0:00:01.018) 0:06:29.653 **** 2025-09-06 00:30:17.412384 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:17.412403 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:17.412445 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:30:17.412464 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:17.412483 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:30:17.412503 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:30:17.412522 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:30:17.412540 | orchestrator | 2025-09-06 00:30:17.412554 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-06 00:30:17.412576 | orchestrator | Saturday 06 September 2025 00:30:08 +0000 (0:00:01.065) 0:06:30.719 **** 2025-09-06 00:30:17.412587 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:17.412598 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:17.412608 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:30:17.412619 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:17.412629 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:30:17.412640 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:30:17.412650 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:30:17.412661 | orchestrator | 2025-09-06 00:30:17.412671 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-06 00:30:17.412682 | orchestrator | Saturday 06 September 2025 00:30:09 +0000 (0:00:01.078) 0:06:31.797 **** 2025-09-06 00:30:17.412693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:30:17.412704 | orchestrator | 2025-09-06 00:30:17.412715 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-06 00:30:17.412725 | orchestrator | Saturday 06 September 2025 00:30:10 +0000 (0:00:00.878) 0:06:32.675 **** 2025-09-06 00:30:17.412736 | orchestrator | 2025-09-06 00:30:17.412746 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-06 00:30:17.412757 | orchestrator 
| Saturday 06 September 2025 00:30:10 +0000 (0:00:00.035) 0:06:32.711 **** 2025-09-06 00:30:17.412767 | orchestrator | 2025-09-06 00:30:17.412778 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-06 00:30:17.412788 | orchestrator | Saturday 06 September 2025 00:30:10 +0000 (0:00:00.034) 0:06:32.745 **** 2025-09-06 00:30:17.412799 | orchestrator | 2025-09-06 00:30:17.412809 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-06 00:30:17.412820 | orchestrator | Saturday 06 September 2025 00:30:10 +0000 (0:00:00.039) 0:06:32.785 **** 2025-09-06 00:30:17.412830 | orchestrator | 2025-09-06 00:30:17.412840 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-06 00:30:17.412851 | orchestrator | Saturday 06 September 2025 00:30:10 +0000 (0:00:00.035) 0:06:32.820 **** 2025-09-06 00:30:17.412861 | orchestrator | 2025-09-06 00:30:17.412872 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-06 00:30:17.412883 | orchestrator | Saturday 06 September 2025 00:30:10 +0000 (0:00:00.034) 0:06:32.854 **** 2025-09-06 00:30:17.412893 | orchestrator | 2025-09-06 00:30:17.412904 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-06 00:30:17.412914 | orchestrator | Saturday 06 September 2025 00:30:10 +0000 (0:00:00.038) 0:06:32.893 **** 2025-09-06 00:30:17.412925 | orchestrator | 2025-09-06 00:30:17.412936 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-06 00:30:17.412946 | orchestrator | Saturday 06 September 2025 00:30:10 +0000 (0:00:00.034) 0:06:32.928 **** 2025-09-06 00:30:17.412956 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:30:17.412967 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:17.412977 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:17.412988 | orchestrator | 2025-09-06 00:30:17.412998 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-06 00:30:17.413009 | orchestrator | Saturday 06 September 2025 00:30:12 +0000 (0:00:01.222) 0:06:34.150 **** 2025-09-06 00:30:17.413019 | orchestrator | changed: [testbed-manager] 2025-09-06 00:30:17.413030 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:17.413040 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:17.413051 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:30:17.413061 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:17.413072 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:17.413082 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:17.413092 | orchestrator | 2025-09-06 00:30:17.413103 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-06 00:30:17.413121 | orchestrator | Saturday 06 September 2025 00:30:13 +0000 (0:00:01.200) 0:06:35.350 **** 2025-09-06 00:30:17.413132 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:30:17.413142 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:17.413152 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:17.413163 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:17.413173 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:17.413183 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:17.413194 | orchestrator | changed: [testbed-node-2] 2025-09-06 
00:30:17.413205 | orchestrator | 2025-09-06 00:30:17.413215 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-06 00:30:17.413226 | orchestrator | Saturday 06 September 2025 00:30:16 +0000 (0:00:02.987) 0:06:38.338 **** 2025-09-06 00:30:17.413236 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:30:17.413246 | orchestrator | 2025-09-06 00:30:17.413257 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-06 00:30:17.413268 | orchestrator | Saturday 06 September 2025 00:30:16 +0000 (0:00:00.105) 0:06:38.444 **** 2025-09-06 00:30:17.413278 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:17.413289 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:17.413299 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:17.413309 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:30:17.413327 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:43.053587 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:43.053750 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:43.053771 | orchestrator | 2025-09-06 00:30:43.053805 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-06 00:30:43.053818 | orchestrator | Saturday 06 September 2025 00:30:17 +0000 (0:00:00.993) 0:06:39.438 **** 2025-09-06 00:30:43.053830 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:30:43.053841 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:30:43.053852 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:30:43.053863 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:30:43.053874 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:30:43.053885 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:30:43.053896 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:30:43.053907 | orchestrator | 2025-09-06 00:30:43.053918 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-06 00:30:43.053929 | orchestrator | Saturday 06 September 2025 00:30:17 +0000 (0:00:00.499) 0:06:39.937 **** 2025-09-06 00:30:43.053941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:30:43.053954 | orchestrator | 2025-09-06 00:30:43.053965 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-06 00:30:43.053977 | orchestrator | Saturday 06 September 2025 00:30:18 +0000 (0:00:00.969) 0:06:40.907 **** 2025-09-06 00:30:43.053988 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:43.054000 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:43.054011 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:30:43.054085 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:43.054099 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:30:43.054111 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:30:43.054124 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:30:43.054138 | orchestrator | 2025-09-06 00:30:43.054186 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-06 00:30:43.054200 | orchestrator | Saturday 06 September 2025 00:30:19 +0000 (0:00:00.837) 0:06:41.744 **** 2025-09-06 00:30:43.054214 | orchestrator | ok: [testbed-manager] 
=> (item=docker_containers) 2025-09-06 00:30:43.054227 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-06 00:30:43.054240 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-06 00:30:43.054280 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-06 00:30:43.054293 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-06 00:30:43.054306 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-06 00:30:43.054318 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-06 00:30:43.054331 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-06 00:30:43.054344 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-06 00:30:43.054357 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-06 00:30:43.054370 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-06 00:30:43.054383 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-06 00:30:43.054396 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-06 00:30:43.054441 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-06 00:30:43.054460 | orchestrator | 2025-09-06 00:30:43.054471 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-06 00:30:43.054482 | orchestrator | Saturday 06 September 2025 00:30:22 +0000 (0:00:02.427) 0:06:44.171 **** 2025-09-06 00:30:43.054493 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:30:43.054503 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:30:43.054514 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:30:43.054525 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:30:43.054535 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:30:43.054546 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:30:43.054556 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:30:43.054567 | orchestrator | 2025-09-06 00:30:43.054577 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-06 00:30:43.054588 | orchestrator | Saturday 06 September 2025 00:30:22 +0000 (0:00:00.478) 0:06:44.650 **** 2025-09-06 00:30:43.054601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:30:43.054614 | orchestrator | 2025-09-06 00:30:43.054625 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-06 00:30:43.054636 | orchestrator | Saturday 06 September 2025 00:30:23 +0000 (0:00:00.909) 0:06:45.559 **** 2025-09-06 00:30:43.054647 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:43.054657 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:43.054668 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:30:43.054679 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:43.054689 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:30:43.054700 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:30:43.054710 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:30:43.054721 | orchestrator | 2025-09-06 00:30:43.054732 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 
2025-09-06 00:30:43.054742 | orchestrator | Saturday 06 September 2025 00:30:24 +0000 (0:00:00.820) 0:06:46.380 **** 2025-09-06 00:30:43.054754 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:43.054764 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:43.054775 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:30:43.054786 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:43.054796 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:30:43.054807 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:30:43.054817 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:30:43.054828 | orchestrator | 2025-09-06 00:30:43.054839 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-06 00:30:43.054868 | orchestrator | Saturday 06 September 2025 00:30:25 +0000 (0:00:00.808) 0:06:47.189 **** 2025-09-06 00:30:43.054880 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:30:43.054890 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:30:43.054901 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:30:43.054917 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:30:43.054965 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:30:43.054988 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:30:43.055006 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:30:43.055024 | orchestrator | 2025-09-06 00:30:43.055041 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-06 00:30:43.055058 | orchestrator | Saturday 06 September 2025 00:30:25 +0000 (0:00:00.453) 0:06:47.643 **** 2025-09-06 00:30:43.055076 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:43.055094 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:43.055114 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:30:43.055132 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:30:43.055149 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:43.055160 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:30:43.055171 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:30:43.055181 | orchestrator | 2025-09-06 00:30:43.055192 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-06 00:30:43.055202 | orchestrator | Saturday 06 September 2025 00:30:27 +0000 (0:00:01.669) 0:06:49.313 **** 2025-09-06 00:30:43.055213 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:30:43.055224 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:30:43.055235 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:30:43.055245 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:30:43.055256 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:30:43.055267 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:30:43.055277 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:30:43.055288 | orchestrator | 2025-09-06 00:30:43.055299 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-06 00:30:43.055309 | orchestrator | Saturday 06 September 2025 00:30:27 +0000 (0:00:00.531) 0:06:49.844 **** 2025-09-06 00:30:43.055320 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:43.055330 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:43.055341 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:43.055353 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:43.055372 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:43.055422 | 
orchestrator | changed: [testbed-node-2] 2025-09-06 00:30:43.055444 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:43.055471 | orchestrator | 2025-09-06 00:30:43.055493 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-06 00:30:43.055512 | orchestrator | Saturday 06 September 2025 00:30:35 +0000 (0:00:07.966) 0:06:57.810 **** 2025-09-06 00:30:43.055531 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:43.055551 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:43.055569 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:43.055588 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:43.055608 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:30:43.055626 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:43.055638 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:43.055649 | orchestrator | 2025-09-06 00:30:43.055660 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-09-06 00:30:43.055671 | orchestrator | Saturday 06 September 2025 00:30:37 +0000 (0:00:01.326) 0:06:59.137 **** 2025-09-06 00:30:43.055681 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:43.055692 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:43.055702 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:43.055713 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:30:43.055723 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:43.055742 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:43.055753 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:43.055763 | orchestrator | 2025-09-06 00:30:43.055774 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-06 00:30:43.055785 | orchestrator | Saturday 06 September 2025 00:30:38 +0000 (0:00:01.639) 0:07:00.777 **** 2025-09-06 00:30:43.055795 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:43.055817 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:30:43.055828 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:30:43.055839 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:30:43.055849 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:30:43.055860 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:30:43.055871 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:30:43.055882 | orchestrator | 2025-09-06 00:30:43.055892 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-06 00:30:43.055903 | orchestrator | Saturday 06 September 2025 00:30:40 +0000 (0:00:01.917) 0:07:02.694 **** 2025-09-06 00:30:43.055914 | orchestrator | ok: [testbed-manager] 2025-09-06 00:30:43.055925 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:30:43.055935 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:30:43.055946 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:30:43.055957 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:30:43.055968 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:30:43.055978 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:30:43.055989 | orchestrator | 2025-09-06 00:30:43.056000 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-06 00:30:43.056011 | orchestrator | Saturday 06 September 2025 00:30:41 +0000 (0:00:00.839) 0:07:03.534 **** 2025-09-06 00:30:43.056021 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:30:43.056032 
| orchestrator | skipping: [testbed-node-0] 2025-09-06 00:30:43.056043 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:30:43.056053 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:30:43.056064 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:30:43.056075 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:30:43.056085 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:30:43.056096 | orchestrator | 2025-09-06 00:30:43.056107 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-06 00:30:43.056117 | orchestrator | Saturday 06 September 2025 00:30:42 +0000 (0:00:01.005) 0:07:04.540 **** 2025-09-06 00:30:43.056128 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:30:43.056139 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:30:43.056149 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:30:43.056159 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:30:43.056170 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:30:43.056181 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:30:43.056192 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:30:43.056202 | orchestrator | 2025-09-06 00:30:43.056224 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-06 00:31:15.128363 | orchestrator | Saturday 06 September 2025 00:30:43 +0000 (0:00:00.536) 0:07:05.077 **** 2025-09-06 00:31:15.128531 | orchestrator | ok: [testbed-manager] 2025-09-06 00:31:15.128548 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:31:15.128559 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:31:15.128570 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:31:15.128580 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:31:15.128591 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:31:15.128603 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:31:15.128614 | orchestrator | 2025-09-06 00:31:15.128626 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-06 00:31:15.128638 | orchestrator | Saturday 06 September 2025 00:30:43 +0000 (0:00:00.560) 0:07:05.637 **** 2025-09-06 00:31:15.128649 | orchestrator | ok: [testbed-manager] 2025-09-06 00:31:15.128660 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:31:15.128670 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:31:15.128681 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:31:15.128692 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:31:15.128703 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:31:15.128713 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:31:15.128724 | orchestrator | 2025-09-06 00:31:15.128735 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-06 00:31:15.128746 | orchestrator | Saturday 06 September 2025 00:30:44 +0000 (0:00:00.521) 0:07:06.159 **** 2025-09-06 00:31:15.128781 | orchestrator | ok: [testbed-manager] 2025-09-06 00:31:15.128793 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:31:15.128803 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:31:15.128814 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:31:15.128824 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:31:15.128835 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:31:15.128845 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:31:15.128856 | orchestrator | 2025-09-06 00:31:15.128866 | orchestrator | TASK [osism.services.chrony : Populate service facts] 
************************** 2025-09-06 00:31:15.128877 | orchestrator | Saturday 06 September 2025 00:30:44 +0000 (0:00:00.497) 0:07:06.656 **** 2025-09-06 00:31:15.128888 | orchestrator | ok: [testbed-manager] 2025-09-06 00:31:15.128900 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:31:15.128912 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:31:15.128925 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:31:15.128937 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:31:15.128949 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:31:15.128962 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:31:15.128974 | orchestrator | 2025-09-06 00:31:15.128988 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-06 00:31:15.129000 | orchestrator | Saturday 06 September 2025 00:30:50 +0000 (0:00:05.790) 0:07:12.447 **** 2025-09-06 00:31:15.129013 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:31:15.129025 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:31:15.129038 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:31:15.129051 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:31:15.129064 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:31:15.129075 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:31:15.129087 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:31:15.129100 | orchestrator | 2025-09-06 00:31:15.129113 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-06 00:31:15.129126 | orchestrator | Saturday 06 September 2025 00:30:50 +0000 (0:00:00.497) 0:07:12.945 **** 2025-09-06 00:31:15.129156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:31:15.129171 | orchestrator | 2025-09-06 00:31:15.129184 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-06 00:31:15.129197 | orchestrator | Saturday 06 September 2025 00:30:51 +0000 (0:00:00.733) 0:07:13.679 **** 2025-09-06 00:31:15.129209 | orchestrator | ok: [testbed-manager] 2025-09-06 00:31:15.129221 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:31:15.129234 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:31:15.129246 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:31:15.129258 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:31:15.129268 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:31:15.129278 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:31:15.129289 | orchestrator | 2025-09-06 00:31:15.129300 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-06 00:31:15.129310 | orchestrator | Saturday 06 September 2025 00:30:53 +0000 (0:00:02.178) 0:07:15.857 **** 2025-09-06 00:31:15.129321 | orchestrator | ok: [testbed-manager] 2025-09-06 00:31:15.129331 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:31:15.129342 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:31:15.129352 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:31:15.129363 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:31:15.129373 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:31:15.129384 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:31:15.129458 | orchestrator | 2025-09-06 00:31:15.129470 | orchestrator | TASK [osism.services.chrony : 
Check if configuration file exists] ************** 2025-09-06 00:31:15.129480 | orchestrator | Saturday 06 September 2025 00:30:54 +0000 (0:00:01.102) 0:07:16.959 **** 2025-09-06 00:31:15.129491 | orchestrator | ok: [testbed-manager] 2025-09-06 00:31:15.129502 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:31:15.129521 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:31:15.129532 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:31:15.129542 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:31:15.129553 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:31:15.129563 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:31:15.129574 | orchestrator | 2025-09-06 00:31:15.129585 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-06 00:31:15.129595 | orchestrator | Saturday 06 September 2025 00:30:55 +0000 (0:00:00.834) 0:07:17.794 **** 2025-09-06 00:31:15.129607 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-06 00:31:15.129620 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-06 00:31:15.129631 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-06 00:31:15.129659 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-06 00:31:15.129671 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-06 00:31:15.129681 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-06 00:31:15.129692 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-06 00:31:15.129703 | orchestrator | 2025-09-06 00:31:15.129714 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-06 00:31:15.129725 | orchestrator | Saturday 06 September 2025 00:30:57 +0000 (0:00:01.643) 0:07:19.438 **** 2025-09-06 00:31:15.129736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:31:15.129747 | orchestrator | 2025-09-06 00:31:15.129758 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-06 00:31:15.129769 | orchestrator | Saturday 06 September 2025 00:30:58 +0000 (0:00:01.024) 0:07:20.462 **** 2025-09-06 00:31:15.129780 | orchestrator | changed: [testbed-manager] 2025-09-06 00:31:15.129790 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:31:15.129801 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:31:15.129812 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:31:15.129822 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:31:15.129833 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:31:15.129843 | orchestrator | changed: [testbed-node-2] 2025-09-06 
00:31:15.129854 | orchestrator | 2025-09-06 00:31:15.129865 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-06 00:31:15.129875 | orchestrator | Saturday 06 September 2025 00:31:07 +0000 (0:00:08.914) 0:07:29.377 **** 2025-09-06 00:31:15.129886 | orchestrator | ok: [testbed-manager] 2025-09-06 00:31:15.129897 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:31:15.129907 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:31:15.129918 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:31:15.129929 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:31:15.129939 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:31:15.129950 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:31:15.129961 | orchestrator | 2025-09-06 00:31:15.129971 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-06 00:31:15.129983 | orchestrator | Saturday 06 September 2025 00:31:09 +0000 (0:00:01.896) 0:07:31.274 **** 2025-09-06 00:31:15.129993 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:31:15.130004 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:31:15.130082 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:31:15.130096 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:31:15.130107 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:31:15.130117 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:31:15.130128 | orchestrator | 2025-09-06 00:31:15.130139 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-06 00:31:15.130156 | orchestrator | Saturday 06 September 2025 00:31:10 +0000 (0:00:01.302) 0:07:32.577 **** 2025-09-06 00:31:15.130167 | orchestrator | changed: [testbed-manager] 2025-09-06 00:31:15.130178 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:31:15.130189 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:31:15.130200 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:31:15.130210 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:31:15.130221 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:31:15.130232 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:31:15.130243 | orchestrator | 2025-09-06 00:31:15.130253 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-09-06 00:31:15.130264 | orchestrator | 2025-09-06 00:31:15.130275 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-06 00:31:15.130286 | orchestrator | Saturday 06 September 2025 00:31:11 +0000 (0:00:01.199) 0:07:33.776 **** 2025-09-06 00:31:15.130296 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:31:15.130307 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:31:15.130318 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:31:15.130329 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:31:15.130339 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:31:15.130350 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:31:15.130361 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:31:15.130371 | orchestrator | 2025-09-06 00:31:15.130382 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-06 00:31:15.130415 | orchestrator | 2025-09-06 00:31:15.130426 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-06 00:31:15.130437 | orchestrator | Saturday 06 September 2025 00:31:12 
+0000 (0:00:00.504) 0:07:34.281 **** 2025-09-06 00:31:15.130448 | orchestrator | changed: [testbed-manager] 2025-09-06 00:31:15.130458 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:31:15.130469 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:31:15.130480 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:31:15.130491 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:31:15.130501 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:31:15.130512 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:31:15.130523 | orchestrator | 2025-09-06 00:31:15.130534 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-06 00:31:15.130544 | orchestrator | Saturday 06 September 2025 00:31:13 +0000 (0:00:01.303) 0:07:35.584 **** 2025-09-06 00:31:15.130555 | orchestrator | ok: [testbed-manager] 2025-09-06 00:31:15.130566 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:31:15.130577 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:31:15.130587 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:31:15.130598 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:31:15.130609 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:31:15.130620 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:31:15.130630 | orchestrator | 2025-09-06 00:31:15.130641 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-06 00:31:15.130660 | orchestrator | Saturday 06 September 2025 00:31:15 +0000 (0:00:01.566) 0:07:37.151 **** 2025-09-06 00:31:38.514000 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:31:38.514119 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:31:38.514126 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:31:38.514131 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:31:38.514136 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:31:38.514140 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:31:38.514144 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:31:38.514148 | orchestrator | 2025-09-06 00:31:38.514169 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-09-06 00:31:38.514174 | orchestrator | Saturday 06 September 2025 00:31:15 +0000 (0:00:00.457) 0:07:37.609 **** 2025-09-06 00:31:38.514178 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:31:38.514184 | orchestrator | 2025-09-06 00:31:38.514188 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-09-06 00:31:38.514192 | orchestrator | Saturday 06 September 2025 00:31:16 +0000 (0:00:00.896) 0:07:38.505 **** 2025-09-06 00:31:38.514198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:31:38.514204 | orchestrator | 2025-09-06 00:31:38.514208 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-06 00:31:38.514211 | orchestrator | Saturday 06 September 2025 00:31:17 +0000 (0:00:00.738) 0:07:39.244 **** 2025-09-06 00:31:38.514215 | orchestrator | changed: [testbed-manager] 2025-09-06 00:31:38.514219 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:31:38.514223 | 
orchestrator | changed: [testbed-node-1] 2025-09-06 00:31:38.514227 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:31:38.514230 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:31:38.514234 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:31:38.514238 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:31:38.514241 | orchestrator | 2025-09-06 00:31:38.514245 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-06 00:31:38.514249 | orchestrator | Saturday 06 September 2025 00:31:25 +0000 (0:00:08.186) 0:07:47.431 **** 2025-09-06 00:31:38.514253 | orchestrator | changed: [testbed-manager] 2025-09-06 00:31:38.514256 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:31:38.514260 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:31:38.514264 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:31:38.514267 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:31:38.514271 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:31:38.514275 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:31:38.514278 | orchestrator | 2025-09-06 00:31:38.514282 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-06 00:31:38.514286 | orchestrator | Saturday 06 September 2025 00:31:26 +0000 (0:00:00.815) 0:07:48.246 **** 2025-09-06 00:31:38.514290 | orchestrator | changed: [testbed-manager] 2025-09-06 00:31:38.514293 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:31:38.514297 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:31:38.514301 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:31:38.514305 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:31:38.514308 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:31:38.514313 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:31:38.514316 | orchestrator | 2025-09-06 00:31:38.514320 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-06 00:31:38.514324 | orchestrator | Saturday 06 September 2025 00:31:27 +0000 (0:00:01.518) 0:07:49.765 **** 2025-09-06 00:31:38.514328 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:31:38.514332 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:31:38.514336 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:31:38.514340 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:31:38.514343 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:31:38.514347 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:31:38.514351 | orchestrator | changed: [testbed-manager] 2025-09-06 00:31:38.514355 | orchestrator | 2025-09-06 00:31:38.514359 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-06 00:31:38.514362 | orchestrator | Saturday 06 September 2025 00:31:30 +0000 (0:00:02.285) 0:07:52.051 **** 2025-09-06 00:31:38.514366 | orchestrator | changed: [testbed-manager] 2025-09-06 00:31:38.514373 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:31:38.514409 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:31:38.514413 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:31:38.514416 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:31:38.514420 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:31:38.514424 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:31:38.514427 | orchestrator | 2025-09-06 00:31:38.514431 | orchestrator | RUNNING HANDLER 
[osism.services.smartd : Restart smartd service] *************** 2025-09-06 00:31:38.514435 | orchestrator | Saturday 06 September 2025 00:31:31 +0000 (0:00:01.234) 0:07:53.286 **** 2025-09-06 00:31:38.514439 | orchestrator | changed: [testbed-manager] 2025-09-06 00:31:38.514443 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:31:38.514446 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:31:38.514450 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:31:38.514454 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:31:38.514457 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:31:38.514461 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:31:38.514465 | orchestrator | 2025-09-06 00:31:38.514469 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-06 00:31:38.514472 | orchestrator | 2025-09-06 00:31:38.514476 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-06 00:31:38.514480 | orchestrator | Saturday 06 September 2025 00:31:32 +0000 (0:00:01.441) 0:07:54.727 **** 2025-09-06 00:31:38.514484 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:31:38.514488 | orchestrator | 2025-09-06 00:31:38.514492 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-06 00:31:38.514505 | orchestrator | Saturday 06 September 2025 00:31:33 +0000 (0:00:00.787) 0:07:55.515 **** 2025-09-06 00:31:38.514510 | orchestrator | ok: [testbed-manager] 2025-09-06 00:31:38.514515 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:31:38.514519 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:31:38.514522 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:31:38.514526 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:31:38.514530 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:31:38.514534 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:31:38.514537 | orchestrator | 2025-09-06 00:31:38.514541 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-06 00:31:38.514545 | orchestrator | Saturday 06 September 2025 00:31:34 +0000 (0:00:00.823) 0:07:56.339 **** 2025-09-06 00:31:38.514549 | orchestrator | changed: [testbed-manager] 2025-09-06 00:31:38.514553 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:31:38.514556 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:31:38.514560 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:31:38.514564 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:31:38.514567 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:31:38.514571 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:31:38.514575 | orchestrator | 2025-09-06 00:31:38.514579 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-06 00:31:38.514582 | orchestrator | Saturday 06 September 2025 00:31:35 +0000 (0:00:01.305) 0:07:57.644 **** 2025-09-06 00:31:38.514619 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:31:38.514624 | orchestrator | 2025-09-06 00:31:38.514627 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-06 00:31:38.514631 | orchestrator | Saturday 06 September 2025 
00:31:36 +0000 (0:00:00.816) 0:07:58.461 ****
2025-09-06 00:31:38.514635 | orchestrator | ok: [testbed-manager]
2025-09-06 00:31:38.514639 | orchestrator | ok: [testbed-node-0]
2025-09-06 00:31:38.514642 | orchestrator | ok: [testbed-node-1]
2025-09-06 00:31:38.514646 | orchestrator | ok: [testbed-node-2]
2025-09-06 00:31:38.514650 | orchestrator | ok: [testbed-node-3]
2025-09-06 00:31:38.514657 | orchestrator | ok: [testbed-node-4]
2025-09-06 00:31:38.514661 | orchestrator | ok: [testbed-node-5]
2025-09-06 00:31:38.514664 | orchestrator |
2025-09-06 00:31:38.514668 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-06 00:31:38.514672 | orchestrator | Saturday 06 September 2025 00:31:37 +0000 (0:00:00.821) 0:07:59.282 ****
2025-09-06 00:31:38.514676 | orchestrator | changed: [testbed-manager]
2025-09-06 00:31:38.514679 | orchestrator | changed: [testbed-node-0]
2025-09-06 00:31:38.514683 | orchestrator | changed: [testbed-node-1]
2025-09-06 00:31:38.514687 | orchestrator | changed: [testbed-node-2]
2025-09-06 00:31:38.514690 | orchestrator | changed: [testbed-node-3]
2025-09-06 00:31:38.514694 | orchestrator | changed: [testbed-node-4]
2025-09-06 00:31:38.514698 | orchestrator | changed: [testbed-node-5]
2025-09-06 00:31:38.514701 | orchestrator |
2025-09-06 00:31:38.514705 | orchestrator | PLAY RECAP *********************************************************************
2025-09-06 00:31:38.514710 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-09-06 00:31:38.514715 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-06 00:31:38.514721 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-06 00:31:38.514725 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-06 00:31:38.514729 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-06 00:31:38.514732 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-06 00:31:38.514736 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-06 00:31:38.514740 | orchestrator |
2025-09-06 00:31:38.514744 | orchestrator |
2025-09-06 00:31:38.514747 | orchestrator | TASKS RECAP ********************************************************************
2025-09-06 00:31:38.514751 | orchestrator | Saturday 06 September 2025 00:31:38 +0000 (0:00:01.244) 0:08:00.527 ****
2025-09-06 00:31:38.514755 | orchestrator | ===============================================================================
2025-09-06 00:31:38.514759 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.16s
2025-09-06 00:31:38.514763 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.02s
2025-09-06 00:31:38.514766 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.99s
2025-09-06 00:31:38.514770 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.52s
2025-09-06 00:31:38.514774 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.21s
2025-09-06 00:31:38.514778 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.03s
2025-09-06 00:31:38.514781 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.45s
2025-09-06 00:31:38.514786 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.34s
2025-09-06 00:31:38.514789 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.91s
2025-09-06 00:31:38.514793 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.87s
2025-09-06 00:31:38.514799 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.27s
2025-09-06 00:31:38.897099 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.19s
2025-09-06 00:31:38.897195 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.19s
2025-09-06 00:31:38.897234 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.97s
2025-09-06 00:31:38.897244 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.97s
2025-09-06 00:31:38.897254 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.86s
2025-09-06 00:31:38.897263 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.52s
2025-09-06 00:31:38.897273 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.38s
2025-09-06 00:31:38.897282 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.22s
2025-09-06 00:31:38.897292 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 6.13s
2025-09-06 00:31:39.153961 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-06 00:31:39.154106 | orchestrator | + osism apply network
2025-09-06 00:31:51.593833 | orchestrator | 2025-09-06 00:31:51 | INFO  | Task 33adf1ff-d900-4c6b-839f-96ca8802dc24 (network) was prepared for execution.
2025-09-06 00:31:51.593935 | orchestrator | 2025-09-06 00:31:51 | INFO  | It takes a moment until task 33adf1ff-d900-4c6b-839f-96ca8802dc24 (network) has been started and output is visible here.
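Note: the "osism apply network" call queued above hands off to the osism.commons.network role, whose output follows. On these Debian-family hosts the role renders a netplan file (written as /etc/netplan/01-osism.yaml, as the cleanup task further down shows) and removes the cloud-init generated 50-cloud-init.yaml. For orientation, a minimal netplan file of the kind such a role produces is sketched below; the interface name, prefix length, gateway and resolver are assumptions for illustration only, not values taken from the testbed's actual template.

# Illustrative sketch of a netplan file like the one rendered to
# /etc/netplan/01-osism.yaml; interface name and most values are assumed.
network:
  version: 2
  ethernets:
    eth0:                        # assumed interface name
      dhcp4: false
      addresses:
        - 192.168.16.10/20       # management address of testbed-node-0 per the log; prefix length assumed
      routes:
        - to: default
          via: 192.168.16.1      # assumed gateway
      nameservers:
        addresses:
          - 9.9.9.9              # assumed resolver

After such a file is in place, netplan apply (or a reboot) activates it; the real role drives this through its own handlers rather than manually.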
2025-09-06 00:32:18.809777 | orchestrator | 2025-09-06 00:32:18.809896 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-06 00:32:18.809914 | orchestrator | 2025-09-06 00:32:18.809926 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-06 00:32:18.809938 | orchestrator | Saturday 06 September 2025 00:31:55 +0000 (0:00:00.259) 0:00:00.259 **** 2025-09-06 00:32:18.809949 | orchestrator | ok: [testbed-manager] 2025-09-06 00:32:18.809961 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:32:18.809972 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:32:18.809984 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:32:18.809995 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:32:18.810006 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:32:18.810073 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:32:18.810088 | orchestrator | 2025-09-06 00:32:18.810100 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-06 00:32:18.810111 | orchestrator | Saturday 06 September 2025 00:31:56 +0000 (0:00:00.543) 0:00:00.803 **** 2025-09-06 00:32:18.810125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:32:18.810138 | orchestrator | 2025-09-06 00:32:18.810150 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-06 00:32:18.810171 | orchestrator | Saturday 06 September 2025 00:31:57 +0000 (0:00:01.014) 0:00:01.818 **** 2025-09-06 00:32:18.810182 | orchestrator | ok: [testbed-manager] 2025-09-06 00:32:18.810194 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:32:18.810205 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:32:18.810216 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:32:18.810227 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:32:18.810238 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:32:18.810249 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:32:18.810259 | orchestrator | 2025-09-06 00:32:18.810271 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-06 00:32:18.810282 | orchestrator | Saturday 06 September 2025 00:31:59 +0000 (0:00:01.907) 0:00:03.725 **** 2025-09-06 00:32:18.810293 | orchestrator | ok: [testbed-manager] 2025-09-06 00:32:18.810303 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:32:18.810314 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:32:18.810325 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:32:18.810336 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:32:18.810346 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:32:18.810384 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:32:18.810396 | orchestrator | 2025-09-06 00:32:18.810407 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-06 00:32:18.810443 | orchestrator | Saturday 06 September 2025 00:32:00 +0000 (0:00:01.652) 0:00:05.378 **** 2025-09-06 00:32:18.810455 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-06 00:32:18.810466 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-06 00:32:18.810477 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-06 00:32:18.810488 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-06 00:32:18.810499 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-06 00:32:18.810509 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-06 00:32:18.810520 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-06 00:32:18.810531 | orchestrator | 2025-09-06 00:32:18.810542 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-06 00:32:18.810552 | orchestrator | Saturday 06 September 2025 00:32:01 +0000 (0:00:00.986) 0:00:06.364 **** 2025-09-06 00:32:18.810563 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-06 00:32:18.810574 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 00:32:18.810585 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 00:32:18.810596 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-06 00:32:18.810606 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-06 00:32:18.810617 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-06 00:32:18.810628 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-06 00:32:18.810638 | orchestrator | 2025-09-06 00:32:18.810649 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-06 00:32:18.810660 | orchestrator | Saturday 06 September 2025 00:32:04 +0000 (0:00:03.165) 0:00:09.530 **** 2025-09-06 00:32:18.810671 | orchestrator | changed: [testbed-manager] 2025-09-06 00:32:18.810682 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:32:18.810692 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:32:18.810703 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:32:18.810714 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:32:18.810724 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:32:18.810735 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:32:18.810745 | orchestrator | 2025-09-06 00:32:18.810756 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-06 00:32:18.810767 | orchestrator | Saturday 06 September 2025 00:32:06 +0000 (0:00:01.446) 0:00:10.977 **** 2025-09-06 00:32:18.810777 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 00:32:18.810788 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 00:32:18.810798 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-06 00:32:18.810809 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-06 00:32:18.810820 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-06 00:32:18.810830 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-06 00:32:18.810841 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-06 00:32:18.810851 | orchestrator | 2025-09-06 00:32:18.810862 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-06 00:32:18.810873 | orchestrator | Saturday 06 September 2025 00:32:08 +0000 (0:00:01.808) 0:00:12.785 **** 2025-09-06 00:32:18.810884 | orchestrator | ok: [testbed-manager] 2025-09-06 00:32:18.810894 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:32:18.810905 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:32:18.810916 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:32:18.810926 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:32:18.810937 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:32:18.810947 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:32:18.810958 | orchestrator | 2025-09-06 
00:32:18.810969 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-06 00:32:18.810996 | orchestrator | Saturday 06 September 2025 00:32:09 +0000 (0:00:01.130) 0:00:13.915 **** 2025-09-06 00:32:18.811008 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:32:18.811019 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:32:18.811029 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:32:18.811050 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:32:18.811061 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:32:18.811072 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:32:18.811082 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:32:18.811093 | orchestrator | 2025-09-06 00:32:18.811104 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-09-06 00:32:18.811115 | orchestrator | Saturday 06 September 2025 00:32:09 +0000 (0:00:00.638) 0:00:14.554 **** 2025-09-06 00:32:18.811125 | orchestrator | ok: [testbed-manager] 2025-09-06 00:32:18.811136 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:32:18.811147 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:32:18.811157 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:32:18.811168 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:32:18.811178 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:32:18.811189 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:32:18.811200 | orchestrator | 2025-09-06 00:32:18.811210 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-06 00:32:18.811221 | orchestrator | Saturday 06 September 2025 00:32:12 +0000 (0:00:02.089) 0:00:16.643 **** 2025-09-06 00:32:18.811232 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:32:18.811243 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:32:18.811253 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:32:18.811264 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:32:18.811275 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:32:18.811285 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:32:18.811310 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-06 00:32:18.811322 | orchestrator | 2025-09-06 00:32:18.811333 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-06 00:32:18.811345 | orchestrator | Saturday 06 September 2025 00:32:12 +0000 (0:00:00.844) 0:00:17.487 **** 2025-09-06 00:32:18.811355 | orchestrator | ok: [testbed-manager] 2025-09-06 00:32:18.811383 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:32:18.811393 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:32:18.811404 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:32:18.811415 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:32:18.811425 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:32:18.811436 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:32:18.811447 | orchestrator | 2025-09-06 00:32:18.811458 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-06 00:32:18.811468 | orchestrator | Saturday 06 September 2025 00:32:14 +0000 (0:00:01.611) 0:00:19.099 **** 2025-09-06 00:32:18.811480 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:32:18.811492 | orchestrator | 2025-09-06 00:32:18.811503 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-06 00:32:18.811514 | orchestrator | Saturday 06 September 2025 00:32:15 +0000 (0:00:01.241) 0:00:20.340 **** 2025-09-06 00:32:18.811525 | orchestrator | ok: [testbed-manager] 2025-09-06 00:32:18.811536 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:32:18.811546 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:32:18.811557 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:32:18.811568 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:32:18.811579 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:32:18.811589 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:32:18.811600 | orchestrator | 2025-09-06 00:32:18.811611 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-06 00:32:18.811621 | orchestrator | Saturday 06 September 2025 00:32:16 +0000 (0:00:00.951) 0:00:21.292 **** 2025-09-06 00:32:18.811632 | orchestrator | ok: [testbed-manager] 2025-09-06 00:32:18.811643 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:32:18.811654 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:32:18.811672 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:32:18.811682 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:32:18.811693 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:32:18.811704 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:32:18.811714 | orchestrator | 2025-09-06 00:32:18.811725 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-06 00:32:18.811736 | orchestrator | Saturday 06 September 2025 00:32:17 +0000 (0:00:00.785) 0:00:22.077 **** 2025-09-06 00:32:18.811747 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-06 00:32:18.811758 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-06 00:32:18.811768 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-06 00:32:18.811779 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-06 00:32:18.811790 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-06 00:32:18.811800 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-06 00:32:18.811811 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-06 00:32:18.811822 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-06 00:32:18.811832 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-06 00:32:18.811843 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-06 00:32:18.811854 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-06 00:32:18.811864 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-06 00:32:18.811875 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-06 00:32:18.811886 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-06 
00:32:18.811897 | orchestrator | 2025-09-06 00:32:18.811915 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-06 00:32:34.510327 | orchestrator | Saturday 06 September 2025 00:32:18 +0000 (0:00:01.320) 0:00:23.398 **** 2025-09-06 00:32:34.510475 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:32:34.510490 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:32:34.510500 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:32:34.510510 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:32:34.510520 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:32:34.510530 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:32:34.510540 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:32:34.510550 | orchestrator | 2025-09-06 00:32:34.510561 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-06 00:32:34.510571 | orchestrator | Saturday 06 September 2025 00:32:19 +0000 (0:00:00.650) 0:00:24.048 **** 2025-09-06 00:32:34.510582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-3, testbed-node-1, testbed-node-4, testbed-manager, testbed-node-0, testbed-node-2, testbed-node-5 2025-09-06 00:32:34.510594 | orchestrator | 2025-09-06 00:32:34.510604 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-06 00:32:34.510614 | orchestrator | Saturday 06 September 2025 00:32:23 +0000 (0:00:04.420) 0:00:28.469 **** 2025-09-06 00:32:34.510640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510651 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510703 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510713 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.510723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.510758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.510768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.510793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.510804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.510814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.510824 | orchestrator | 2025-09-06 00:32:34.510834 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-06 00:32:34.510844 | orchestrator | Saturday 06 September 2025 00:32:28 +0000 (0:00:05.033) 0:00:33.502 **** 2025-09-06 00:32:34.510854 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510876 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.510889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510900 | orchestrator | 
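The item data above is everything systemd-networkd needs for a unicast VXLAN mesh: VNI, MTU, the local endpoint and the list of remote VTEPs. As a minimal sketch of how such files could be rendered for testbed-manager's vxlan0 (VNI 42, local 192.168.16.5, address 192.168.112.5/20); the file names and the flood-entry mechanism via [BridgeFDB] are assumptions here, the authoritative templates live in the osism.commons.network role:

# Hypothetical rendering; the underlying NIC's .network additionally needs a
# "VXLAN=vxlan0" line in its [Network] section so networkd attaches the netdev.
cat > /etc/systemd/network/30-vxlan0.netdev <<'EOF'
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
EOF

cat > /etc/systemd/network/30-vxlan0.network <<'EOF'
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

# One all-zero FDB entry per remote VTEP so flooded traffic is replicated to it
# (repeat the section for 192.168.16.11 ... 192.168.16.15).
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10
EOF

networkctl reload
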
changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-06 00:32:34.510958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.510970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.510982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.510994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:34.511016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:40.521995 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-06 00:32:40.522159 | orchestrator | 2025-09-06 00:32:40.522176 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-06 00:32:40.522189 | orchestrator | Saturday 06 September 2025 00:32:34 +0000 (0:00:05.583) 0:00:39.086 
**** 2025-09-06 00:32:40.522226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:32:40.522239 | orchestrator | 2025-09-06 00:32:40.522250 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-06 00:32:40.522260 | orchestrator | Saturday 06 September 2025 00:32:35 +0000 (0:00:01.110) 0:00:40.197 **** 2025-09-06 00:32:40.522272 | orchestrator | ok: [testbed-manager] 2025-09-06 00:32:40.522284 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:32:40.522295 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:32:40.522305 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:32:40.522316 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:32:40.522326 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:32:40.522338 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:32:40.522378 | orchestrator | 2025-09-06 00:32:40.522390 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-06 00:32:40.522401 | orchestrator | Saturday 06 September 2025 00:32:36 +0000 (0:00:01.149) 0:00:41.346 **** 2025-09-06 00:32:40.522412 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-06 00:32:40.522424 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-06 00:32:40.522435 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-06 00:32:40.522445 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-06 00:32:40.522456 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:32:40.522467 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-06 00:32:40.522478 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-06 00:32:40.522488 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-06 00:32:40.522499 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-06 00:32:40.522510 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-06 00:32:40.522520 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-06 00:32:40.522531 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-06 00:32:40.522544 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-06 00:32:40.522557 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:32:40.522570 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-06 00:32:40.522582 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-06 00:32:40.522594 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-06 00:32:40.522606 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-06 00:32:40.522618 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:32:40.522631 | orchestrator | skipping: [testbed-node-3] => 
(item=/etc/systemd/network/30-vxlan1.network)  2025-09-06 00:32:40.522644 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-06 00:32:40.522656 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-06 00:32:40.522669 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-06 00:32:40.522682 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:32:40.522712 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-06 00:32:40.522725 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-06 00:32:40.522737 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-06 00:32:40.522757 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-06 00:32:40.522771 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:32:40.522784 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:32:40.522795 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-06 00:32:40.522806 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-06 00:32:40.522817 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-06 00:32:40.522827 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-06 00:32:40.522838 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:32:40.522849 | orchestrator | 2025-09-06 00:32:40.522860 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-06 00:32:40.522886 | orchestrator | Saturday 06 September 2025 00:32:38 +0000 (0:00:01.957) 0:00:43.304 **** 2025-09-06 00:32:40.522897 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:32:40.522908 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:32:40.522919 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:32:40.522930 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:32:40.522941 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:32:40.522951 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:32:40.522962 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:32:40.522973 | orchestrator | 2025-09-06 00:32:40.522984 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-06 00:32:40.522994 | orchestrator | Saturday 06 September 2025 00:32:39 +0000 (0:00:00.635) 0:00:43.939 **** 2025-09-06 00:32:40.523005 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:32:40.523016 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:32:40.523026 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:32:40.523037 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:32:40.523047 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:32:40.523058 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:32:40.523068 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:32:40.523079 | orchestrator | 2025-09-06 00:32:40.523090 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:32:40.523107 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-06 00:32:40.523121 
| orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:32:40.523132 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:32:40.523143 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:32:40.523154 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:32:40.523164 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:32:40.523175 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 00:32:40.523186 | orchestrator | 2025-09-06 00:32:40.523196 | orchestrator | 2025-09-06 00:32:40.523207 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:32:40.523218 | orchestrator | Saturday 06 September 2025 00:32:40 +0000 (0:00:00.749) 0:00:44.688 **** 2025-09-06 00:32:40.523229 | orchestrator | =============================================================================== 2025-09-06 00:32:40.523246 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.58s 2025-09-06 00:32:40.523257 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.03s 2025-09-06 00:32:40.523268 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.42s 2025-09-06 00:32:40.523279 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.17s 2025-09-06 00:32:40.523290 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.09s 2025-09-06 00:32:40.523300 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.96s 2025-09-06 00:32:40.523311 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.91s 2025-09-06 00:32:40.523322 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.81s 2025-09-06 00:32:40.523332 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.65s 2025-09-06 00:32:40.523343 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.61s 2025-09-06 00:32:40.523371 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.45s 2025-09-06 00:32:40.523382 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.32s 2025-09-06 00:32:40.523393 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.24s 2025-09-06 00:32:40.523404 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s 2025-09-06 00:32:40.523415 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s 2025-09-06 00:32:40.523426 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.11s 2025-09-06 00:32:40.523437 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.01s 2025-09-06 00:32:40.523448 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s 2025-09-06 00:32:40.523458 | orchestrator | osism.commons.network : List existing configuration 
files --------------- 0.95s 2025-09-06 00:32:40.523469 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.84s 2025-09-06 00:32:40.786214 | orchestrator | + osism apply wireguard 2025-09-06 00:32:52.756961 | orchestrator | 2025-09-06 00:32:52 | INFO  | Task e8eb8939-f0d7-47ac-852b-b66287a0137d (wireguard) was prepared for execution. 2025-09-06 00:32:52.757069 | orchestrator | 2025-09-06 00:32:52 | INFO  | It takes a moment until task e8eb8939-f0d7-47ac-852b-b66287a0137d (wireguard) has been started and output is visible here. 2025-09-06 00:33:11.086616 | orchestrator | 2025-09-06 00:33:11.086742 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-06 00:33:11.086758 | orchestrator | 2025-09-06 00:33:11.086770 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-06 00:33:11.086781 | orchestrator | Saturday 06 September 2025 00:32:56 +0000 (0:00:00.167) 0:00:00.167 **** 2025-09-06 00:33:11.086793 | orchestrator | ok: [testbed-manager] 2025-09-06 00:33:11.086804 | orchestrator | 2025-09-06 00:33:11.086815 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-06 00:33:11.086826 | orchestrator | Saturday 06 September 2025 00:32:57 +0000 (0:00:01.193) 0:00:01.361 **** 2025-09-06 00:33:11.086837 | orchestrator | changed: [testbed-manager] 2025-09-06 00:33:11.086849 | orchestrator | 2025-09-06 00:33:11.086859 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-06 00:33:11.086870 | orchestrator | Saturday 06 September 2025 00:33:03 +0000 (0:00:05.913) 0:00:07.274 **** 2025-09-06 00:33:11.086881 | orchestrator | changed: [testbed-manager] 2025-09-06 00:33:11.086892 | orchestrator | 2025-09-06 00:33:11.086903 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-06 00:33:11.086914 | orchestrator | Saturday 06 September 2025 00:33:04 +0000 (0:00:00.560) 0:00:07.835 **** 2025-09-06 00:33:11.086925 | orchestrator | changed: [testbed-manager] 2025-09-06 00:33:11.086962 | orchestrator | 2025-09-06 00:33:11.086990 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-06 00:33:11.087002 | orchestrator | Saturday 06 September 2025 00:33:04 +0000 (0:00:00.429) 0:00:08.264 **** 2025-09-06 00:33:11.087013 | orchestrator | ok: [testbed-manager] 2025-09-06 00:33:11.087024 | orchestrator | 2025-09-06 00:33:11.087035 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-06 00:33:11.087046 | orchestrator | Saturday 06 September 2025 00:33:05 +0000 (0:00:00.510) 0:00:08.774 **** 2025-09-06 00:33:11.087056 | orchestrator | ok: [testbed-manager] 2025-09-06 00:33:11.087067 | orchestrator | 2025-09-06 00:33:11.087078 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-06 00:33:11.087089 | orchestrator | Saturday 06 September 2025 00:33:05 +0000 (0:00:00.518) 0:00:09.292 **** 2025-09-06 00:33:11.087100 | orchestrator | ok: [testbed-manager] 2025-09-06 00:33:11.087110 | orchestrator | 2025-09-06 00:33:11.087121 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-06 00:33:11.087132 | orchestrator | Saturday 06 September 2025 00:33:05 +0000 (0:00:00.420) 0:00:09.713 **** 2025-09-06 00:33:11.087143 | orchestrator | 
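The role has just generated the server keypair and the preshared key and is about to template /etc/wireguard/wg0.conf plus one client configuration per peer. A minimal shell sketch of those steps, with an assumed 192.168.48.0/24 tunnel network and a locally generated stand-in client key (the real addresses, port and key material come from the role's defaults and the files created above):

umask 077
wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
wg genpsk > /etc/wireguard/preshared.key
# Stand-in for the real client keypair distributed via the client configs.
client_key=$(wg genkey)
client_pub=$(printf '%s' "$client_key" | wg pubkey)

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
# Placeholder tunnel address and port.
Address = 192.168.48.1/24
ListenPort = 51820
PrivateKey = $(cat /etc/wireguard/server.key)
PostUp = iptables -A FORWARD -i %i -j ACCEPT
PostDown = iptables -D FORWARD -i %i -j ACCEPT

[Peer]
# One block per client; placeholder client address.
PublicKey = $client_pub
PresharedKey = $(cat /etc/wireguard/preshared.key)
AllowedIPs = 192.168.48.10/32
EOF

systemctl enable --now wg-quick@wg0.service
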
changed: [testbed-manager] 2025-09-06 00:33:11.087155 | orchestrator | 2025-09-06 00:33:11.087168 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-06 00:33:11.087181 | orchestrator | Saturday 06 September 2025 00:33:07 +0000 (0:00:01.130) 0:00:10.843 **** 2025-09-06 00:33:11.087194 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-06 00:33:11.087207 | orchestrator | changed: [testbed-manager] 2025-09-06 00:33:11.087219 | orchestrator | 2025-09-06 00:33:11.087233 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-06 00:33:11.087246 | orchestrator | Saturday 06 September 2025 00:33:08 +0000 (0:00:00.905) 0:00:11.749 **** 2025-09-06 00:33:11.087259 | orchestrator | changed: [testbed-manager] 2025-09-06 00:33:11.087272 | orchestrator | 2025-09-06 00:33:11.087284 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-06 00:33:11.087296 | orchestrator | Saturday 06 September 2025 00:33:09 +0000 (0:00:01.730) 0:00:13.479 **** 2025-09-06 00:33:11.087309 | orchestrator | changed: [testbed-manager] 2025-09-06 00:33:11.087321 | orchestrator | 2025-09-06 00:33:11.087356 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:33:11.087370 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:33:11.087384 | orchestrator | 2025-09-06 00:33:11.087397 | orchestrator | 2025-09-06 00:33:11.087409 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:33:11.087422 | orchestrator | Saturday 06 September 2025 00:33:10 +0000 (0:00:00.946) 0:00:14.426 **** 2025-09-06 00:33:11.087436 | orchestrator | =============================================================================== 2025-09-06 00:33:11.087448 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.91s 2025-09-06 00:33:11.087461 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s 2025-09-06 00:33:11.087474 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.19s 2025-09-06 00:33:11.087488 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.13s 2025-09-06 00:33:11.087501 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s 2025-09-06 00:33:11.087512 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s 2025-09-06 00:33:11.087523 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-09-06 00:33:11.087534 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.52s 2025-09-06 00:33:11.087544 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s 2025-09-06 00:33:11.087555 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2025-09-06 00:33:11.087574 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-09-06 00:33:11.407932 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-06 00:33:11.444895 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-06 00:33:11.444969 | 
orchestrator | Dload Upload Total Spent Left Speed 2025-09-06 00:33:11.517864 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 191 0 --:--:-- --:--:-- --:--:-- 191 2025-09-06 00:33:11.532680 | orchestrator | + osism apply --environment custom workarounds 2025-09-06 00:33:13.485747 | orchestrator | 2025-09-06 00:33:13 | INFO  | Trying to run play workarounds in environment custom 2025-09-06 00:33:23.602806 | orchestrator | 2025-09-06 00:33:23 | INFO  | Task f91145b3-6440-415a-8056-c033b5d7ae83 (workarounds) was prepared for execution. 2025-09-06 00:33:23.602918 | orchestrator | 2025-09-06 00:33:23 | INFO  | It takes a moment until task f91145b3-6440-415a-8056-c033b5d7ae83 (workarounds) has been started and output is visible here. 2025-09-06 00:33:48.801278 | orchestrator | 2025-09-06 00:33:48.801433 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:33:48.801453 | orchestrator | 2025-09-06 00:33:48.801464 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-06 00:33:48.801476 | orchestrator | Saturday 06 September 2025 00:33:27 +0000 (0:00:00.128) 0:00:00.128 **** 2025-09-06 00:33:48.801487 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-06 00:33:48.801499 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-06 00:33:48.801518 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-06 00:33:48.801530 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-06 00:33:48.801540 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-06 00:33:48.801551 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-06 00:33:48.801562 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-06 00:33:48.801573 | orchestrator | 2025-09-06 00:33:48.801583 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-06 00:33:48.801594 | orchestrator | 2025-09-06 00:33:48.801605 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-06 00:33:48.801616 | orchestrator | Saturday 06 September 2025 00:33:28 +0000 (0:00:00.678) 0:00:00.807 **** 2025-09-06 00:33:48.801627 | orchestrator | ok: [testbed-manager] 2025-09-06 00:33:48.801639 | orchestrator | 2025-09-06 00:33:48.801650 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-06 00:33:48.801661 | orchestrator | 2025-09-06 00:33:48.801671 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-06 00:33:48.801682 | orchestrator | Saturday 06 September 2025 00:33:30 +0000 (0:00:02.080) 0:00:02.887 **** 2025-09-06 00:33:48.801693 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:33:48.801704 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:33:48.801714 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:33:48.801725 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:33:48.801736 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:33:48.801746 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:33:48.801757 | orchestrator | 2025-09-06 00:33:48.801768 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-06 00:33:48.801779 | orchestrator | 
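The next play copies the testbed CA (/opt/configuration/environments/kolla/certificates/ca/testbed.crt) to the non-manager nodes and refreshes the Debian trust store; the RedHat-only update-ca-trust task afterwards is skipped on these Ubuntu hosts. Done by hand, the Debian-family steps amount to roughly the following (the target directory is an assumption; any .crt placed under /usr/local/share/ca-certificates is picked up):

install -m 0644 /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
    /usr/local/share/ca-certificates/testbed.crt
update-ca-certificates
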
2025-09-06 00:33:48.801790 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-06 00:33:48.801802 | orchestrator | Saturday 06 September 2025 00:33:32 +0000 (0:00:01.827) 0:00:04.715 **** 2025-09-06 00:33:48.801816 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-06 00:33:48.801832 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-06 00:33:48.801863 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-06 00:33:48.801875 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-06 00:33:48.801887 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-06 00:33:48.801900 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-06 00:33:48.801912 | orchestrator | 2025-09-06 00:33:48.801925 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-09-06 00:33:48.801937 | orchestrator | Saturday 06 September 2025 00:33:33 +0000 (0:00:01.589) 0:00:06.304 **** 2025-09-06 00:33:48.801949 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:33:48.801962 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:33:48.801974 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:33:48.801986 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:33:48.801998 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:33:48.802010 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:33:48.802121 | orchestrator | 2025-09-06 00:33:48.802147 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-06 00:33:48.802174 | orchestrator | Saturday 06 September 2025 00:33:37 +0000 (0:00:04.022) 0:00:10.327 **** 2025-09-06 00:33:48.802196 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:33:48.802215 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:33:48.802233 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:33:48.802252 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:33:48.802271 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:33:48.802292 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:33:48.802354 | orchestrator | 2025-09-06 00:33:48.802367 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-06 00:33:48.802378 | orchestrator | 2025-09-06 00:33:48.802389 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-06 00:33:48.802400 | orchestrator | Saturday 06 September 2025 00:33:38 +0000 (0:00:00.657) 0:00:10.984 **** 2025-09-06 00:33:48.802410 | orchestrator | changed: [testbed-manager] 2025-09-06 00:33:48.802421 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:33:48.802432 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:33:48.802442 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:33:48.802453 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:33:48.802463 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:33:48.802474 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:33:48.802484 | orchestrator | 2025-09-06 00:33:48.802495 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-06 00:33:48.802505 | orchestrator | Saturday 06 September 2025 00:33:40 +0000 (0:00:01.870) 0:00:12.855 **** 2025-09-06 00:33:48.802516 | orchestrator | changed: [testbed-manager] 2025-09-06 00:33:48.802527 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:33:48.802537 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:33:48.802548 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:33:48.802558 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:33:48.802569 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:33:48.802600 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:33:48.802611 | orchestrator | 2025-09-06 00:33:48.802622 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-06 00:33:48.802633 | orchestrator | Saturday 06 September 2025 00:33:41 +0000 (0:00:01.600) 0:00:14.456 **** 2025-09-06 00:33:48.802644 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:33:48.802654 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:33:48.802665 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:33:48.802675 | orchestrator | ok: [testbed-manager] 2025-09-06 00:33:48.802686 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:33:48.802696 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:33:48.802719 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:33:48.802730 | orchestrator | 2025-09-06 00:33:48.802747 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-06 00:33:48.802758 | orchestrator | Saturday 06 September 2025 00:33:43 +0000 (0:00:01.590) 0:00:16.046 **** 2025-09-06 00:33:48.802769 | orchestrator | changed: [testbed-manager] 2025-09-06 00:33:48.802780 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:33:48.802799 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:33:48.802817 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:33:48.802835 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:33:48.802853 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:33:48.802881 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:33:48.802900 | orchestrator | 2025-09-06 00:33:48.802918 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-06 00:33:48.802939 | orchestrator | Saturday 06 September 2025 00:33:45 +0000 (0:00:01.900) 0:00:17.947 **** 2025-09-06 00:33:48.802957 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:33:48.802974 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:33:48.802985 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:33:48.802995 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:33:48.803006 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:33:48.803016 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:33:48.803026 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:33:48.803037 | orchestrator | 2025-09-06 00:33:48.803047 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-06 00:33:48.803058 | orchestrator | 2025-09-06 00:33:48.803068 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-06 00:33:48.803079 | orchestrator | Saturday 06 September 2025 00:33:45 +0000 (0:00:00.587) 0:00:18.534 **** 2025-09-06 00:33:48.803089 | orchestrator | ok: [testbed-manager] 2025-09-06 00:33:48.803100 
| orchestrator | ok: [testbed-node-3] 2025-09-06 00:33:48.803110 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:33:48.803121 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:33:48.803131 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:33:48.803142 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:33:48.803152 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:33:48.803162 | orchestrator | 2025-09-06 00:33:48.803173 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:33:48.803185 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:33:48.803198 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:33:48.803209 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:33:48.803219 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:33:48.803230 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:33:48.803240 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:33:48.803251 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:33:48.803261 | orchestrator | 2025-09-06 00:33:48.803272 | orchestrator | 2025-09-06 00:33:48.803283 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:33:48.803293 | orchestrator | Saturday 06 September 2025 00:33:48 +0000 (0:00:02.900) 0:00:21.435 **** 2025-09-06 00:33:48.803365 | orchestrator | =============================================================================== 2025-09-06 00:33:48.803379 | orchestrator | Run update-ca-certificates ---------------------------------------------- 4.02s 2025-09-06 00:33:48.803389 | orchestrator | Install python3-docker -------------------------------------------------- 2.90s 2025-09-06 00:33:48.803400 | orchestrator | Apply netplan configuration --------------------------------------------- 2.08s 2025-09-06 00:33:48.803411 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.90s 2025-09-06 00:33:48.803421 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.87s 2025-09-06 00:33:48.803432 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s 2025-09-06 00:33:48.803443 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.60s 2025-09-06 00:33:48.803453 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.59s 2025-09-06 00:33:48.803464 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.59s 2025-09-06 00:33:48.803474 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.68s 2025-09-06 00:33:48.803485 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.66s 2025-09-06 00:33:48.803506 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.59s 2025-09-06 00:33:49.350281 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-06 00:34:01.316880 | orchestrator | 
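The reboot play that follows is guarded by the extra variable ireallymeanit=yes: its first task exits unless the variable is set, and the actual reboot is fired without waiting for the nodes to return. Stripped down to plain shell, the pattern looks roughly like this sketch (node names from the inventory; SSH access and sudo rights are assumed):

ireallymeanit="${1:-no}"
if [ "$ireallymeanit" != "yes" ]; then
    echo "Refusing to reboot; pass ireallymeanit=yes to confirm." >&2
    exit 1
fi
for node in testbed-node-0 testbed-node-1 testbed-node-2 \
            testbed-node-3 testbed-node-4 testbed-node-5; do
    # Fire-and-forget: do not wait for the reboot to complete.
    ssh "$node" 'sudo systemctl reboot' || true
done
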
2025-09-06 00:34:01 | INFO  | Task 91b69f41-c95e-496a-aba8-0c0619dc4ea4 (reboot) was prepared for execution. 2025-09-06 00:34:01.316996 | orchestrator | 2025-09-06 00:34:01 | INFO  | It takes a moment until task 91b69f41-c95e-496a-aba8-0c0619dc4ea4 (reboot) has been started and output is visible here. 2025-09-06 00:34:11.090965 | orchestrator | 2025-09-06 00:34:11.091077 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-06 00:34:11.091094 | orchestrator | 2025-09-06 00:34:11.091106 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-06 00:34:11.091118 | orchestrator | Saturday 06 September 2025 00:34:05 +0000 (0:00:00.162) 0:00:00.162 **** 2025-09-06 00:34:11.091129 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:34:11.091141 | orchestrator | 2025-09-06 00:34:11.091152 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-06 00:34:11.091163 | orchestrator | Saturday 06 September 2025 00:34:05 +0000 (0:00:00.083) 0:00:00.246 **** 2025-09-06 00:34:11.091174 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:34:11.091185 | orchestrator | 2025-09-06 00:34:11.091196 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-06 00:34:11.091207 | orchestrator | Saturday 06 September 2025 00:34:06 +0000 (0:00:00.945) 0:00:01.192 **** 2025-09-06 00:34:11.091218 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:34:11.091228 | orchestrator | 2025-09-06 00:34:11.091240 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-06 00:34:11.091251 | orchestrator | 2025-09-06 00:34:11.091261 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-06 00:34:11.091273 | orchestrator | Saturday 06 September 2025 00:34:06 +0000 (0:00:00.110) 0:00:01.302 **** 2025-09-06 00:34:11.091284 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:34:11.091340 | orchestrator | 2025-09-06 00:34:11.091352 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-06 00:34:11.091363 | orchestrator | Saturday 06 September 2025 00:34:06 +0000 (0:00:00.102) 0:00:01.404 **** 2025-09-06 00:34:11.091374 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:34:11.091385 | orchestrator | 2025-09-06 00:34:11.091396 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-06 00:34:11.091407 | orchestrator | Saturday 06 September 2025 00:34:07 +0000 (0:00:00.695) 0:00:02.100 **** 2025-09-06 00:34:11.091418 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:34:11.091429 | orchestrator | 2025-09-06 00:34:11.091459 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-06 00:34:11.091471 | orchestrator | 2025-09-06 00:34:11.091482 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-06 00:34:11.091493 | orchestrator | Saturday 06 September 2025 00:34:07 +0000 (0:00:00.107) 0:00:02.207 **** 2025-09-06 00:34:11.091503 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:34:11.091514 | orchestrator | 2025-09-06 00:34:11.091528 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-06 00:34:11.091541 | orchestrator | Saturday 06 September 2025 
00:34:07 +0000 (0:00:00.212) 0:00:02.419 **** 2025-09-06 00:34:11.091554 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:34:11.091567 | orchestrator | 2025-09-06 00:34:11.091580 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-06 00:34:11.091593 | orchestrator | Saturday 06 September 2025 00:34:08 +0000 (0:00:00.652) 0:00:03.072 **** 2025-09-06 00:34:11.091606 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:34:11.091619 | orchestrator | 2025-09-06 00:34:11.091631 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-06 00:34:11.091644 | orchestrator | 2025-09-06 00:34:11.091657 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-06 00:34:11.091671 | orchestrator | Saturday 06 September 2025 00:34:08 +0000 (0:00:00.116) 0:00:03.189 **** 2025-09-06 00:34:11.091684 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:34:11.091696 | orchestrator | 2025-09-06 00:34:11.091707 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-06 00:34:11.091718 | orchestrator | Saturday 06 September 2025 00:34:08 +0000 (0:00:00.108) 0:00:03.298 **** 2025-09-06 00:34:11.091729 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:34:11.091740 | orchestrator | 2025-09-06 00:34:11.091750 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-06 00:34:11.091761 | orchestrator | Saturday 06 September 2025 00:34:09 +0000 (0:00:00.665) 0:00:03.963 **** 2025-09-06 00:34:11.091772 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:34:11.091783 | orchestrator | 2025-09-06 00:34:11.091793 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-06 00:34:11.091804 | orchestrator | 2025-09-06 00:34:11.091815 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-06 00:34:11.091826 | orchestrator | Saturday 06 September 2025 00:34:09 +0000 (0:00:00.109) 0:00:04.072 **** 2025-09-06 00:34:11.091836 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:34:11.091847 | orchestrator | 2025-09-06 00:34:11.091858 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-06 00:34:11.091868 | orchestrator | Saturday 06 September 2025 00:34:09 +0000 (0:00:00.102) 0:00:04.175 **** 2025-09-06 00:34:11.091879 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:34:11.091890 | orchestrator | 2025-09-06 00:34:11.091901 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-06 00:34:11.091911 | orchestrator | Saturday 06 September 2025 00:34:09 +0000 (0:00:00.675) 0:00:04.851 **** 2025-09-06 00:34:11.091922 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:34:11.091933 | orchestrator | 2025-09-06 00:34:11.091944 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-06 00:34:11.091954 | orchestrator | 2025-09-06 00:34:11.091965 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-06 00:34:11.091976 | orchestrator | Saturday 06 September 2025 00:34:10 +0000 (0:00:00.105) 0:00:04.957 **** 2025-09-06 00:34:11.091986 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:34:11.091997 | orchestrator | 2025-09-06 00:34:11.092008 | 
orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-06 00:34:11.092018 | orchestrator | Saturday 06 September 2025 00:34:10 +0000 (0:00:00.100) 0:00:05.058 **** 2025-09-06 00:34:11.092029 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:34:11.092040 | orchestrator | 2025-09-06 00:34:11.092050 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-06 00:34:11.092069 | orchestrator | Saturday 06 September 2025 00:34:10 +0000 (0:00:00.659) 0:00:05.717 **** 2025-09-06 00:34:11.092096 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:34:11.092108 | orchestrator | 2025-09-06 00:34:11.092124 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:34:11.092136 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:34:11.092148 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:34:11.092159 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:34:11.092170 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:34:11.092181 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:34:11.092191 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:34:11.092202 | orchestrator | 2025-09-06 00:34:11.092213 | orchestrator | 2025-09-06 00:34:11.092224 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:34:11.092235 | orchestrator | Saturday 06 September 2025 00:34:10 +0000 (0:00:00.031) 0:00:05.748 **** 2025-09-06 00:34:11.092246 | orchestrator | =============================================================================== 2025-09-06 00:34:11.092256 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.29s 2025-09-06 00:34:11.092271 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.71s 2025-09-06 00:34:11.092282 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s 2025-09-06 00:34:11.314247 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-06 00:34:23.371923 | orchestrator | 2025-09-06 00:34:23 | INFO  | Task b579333c-c536-4fbb-a378-ebfcb908fb07 (wait-for-connection) was prepared for execution. 2025-09-06 00:34:23.372036 | orchestrator | 2025-09-06 00:34:23 | INFO  | It takes a moment until task b579333c-c536-4fbb-a378-ebfcb908fb07 (wait-for-connection) has been started and output is visible here. 
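wait-for-connection then blocks until every rebooted node accepts SSH again; in this run the single task takes about 11.5 seconds. A rough shell equivalent, assuming key-based access and a 600-second ceiling per node:

for node in testbed-node-{0..5}; do
    timeout=600
    # Poll until a trivial remote command succeeds again.
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true; do
        sleep 5
        timeout=$((timeout - 5))
        if [ "$timeout" -le 0 ]; then
            echo "$node did not come back after reboot" >&2
            exit 1
        fi
    done
done
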
2025-09-06 00:34:39.102316 | orchestrator | 2025-09-06 00:34:39.102434 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-06 00:34:39.102450 | orchestrator | 2025-09-06 00:34:39.102460 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-06 00:34:39.102471 | orchestrator | Saturday 06 September 2025 00:34:27 +0000 (0:00:00.197) 0:00:00.197 **** 2025-09-06 00:34:39.102481 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:34:39.102491 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:34:39.102501 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:34:39.102511 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:34:39.102520 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:34:39.102530 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:34:39.102539 | orchestrator | 2025-09-06 00:34:39.102549 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:34:39.102560 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:34:39.102571 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:34:39.102581 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:34:39.102616 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:34:39.102627 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:34:39.102636 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:34:39.102646 | orchestrator | 2025-09-06 00:34:39.102655 | orchestrator | 2025-09-06 00:34:39.102665 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:34:39.102674 | orchestrator | Saturday 06 September 2025 00:34:38 +0000 (0:00:11.520) 0:00:11.717 **** 2025-09-06 00:34:39.102684 | orchestrator | =============================================================================== 2025-09-06 00:34:39.102693 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s 2025-09-06 00:34:39.388782 | orchestrator | + osism apply hddtemp 2025-09-06 00:34:51.488853 | orchestrator | 2025-09-06 00:34:51 | INFO  | Task 5d0e8448-70e2-4947-9050-9b18d3be7bf2 (hddtemp) was prepared for execution. 2025-09-06 00:34:51.488959 | orchestrator | 2025-09-06 00:34:51 | INFO  | It takes a moment until task 5d0e8448-70e2-4947-9050-9b18d3be7bf2 (hddtemp) has been started and output is visible here. 
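The hddtemp role that runs next does not install the legacy hddtemp daemon: it removes the package if present, enables the in-kernel drivetemp hwmon driver and installs lm-sensors. On the Debian-family hosts the steps map roughly to the sketch below (persisting the module via modules-load.d is an assumption about the role's mechanism; in this run the immediate module load only happens on testbed-manager):

apt-get remove -y hddtemp || true              # legacy package, absent on Ubuntu 24.04
echo drivetemp > /etc/modules-load.d/drivetemp.conf   # load the hwmon driver on every boot
modprobe drivetemp                              # load it immediately
apt-get install -y lm-sensors
systemctl enable --now lm-sensors.service
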
2025-09-06 00:35:18.666996 | orchestrator | 2025-09-06 00:35:18.667111 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-06 00:35:18.667128 | orchestrator | 2025-09-06 00:35:18.667141 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-06 00:35:18.667153 | orchestrator | Saturday 06 September 2025 00:34:55 +0000 (0:00:00.266) 0:00:00.266 **** 2025-09-06 00:35:18.667164 | orchestrator | ok: [testbed-manager] 2025-09-06 00:35:18.667176 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:35:18.667186 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:35:18.667197 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:35:18.667208 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:35:18.667218 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:35:18.667229 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:35:18.667239 | orchestrator | 2025-09-06 00:35:18.667299 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-06 00:35:18.667313 | orchestrator | Saturday 06 September 2025 00:34:56 +0000 (0:00:00.693) 0:00:00.960 **** 2025-09-06 00:35:18.667347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:35:18.667363 | orchestrator | 2025-09-06 00:35:18.667374 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-06 00:35:18.667385 | orchestrator | Saturday 06 September 2025 00:34:57 +0000 (0:00:01.194) 0:00:02.155 **** 2025-09-06 00:35:18.667396 | orchestrator | ok: [testbed-manager] 2025-09-06 00:35:18.667406 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:35:18.667417 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:35:18.667428 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:35:18.667438 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:35:18.667449 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:35:18.667459 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:35:18.667470 | orchestrator | 2025-09-06 00:35:18.667481 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-06 00:35:18.667492 | orchestrator | Saturday 06 September 2025 00:34:59 +0000 (0:00:01.970) 0:00:04.126 **** 2025-09-06 00:35:18.667503 | orchestrator | changed: [testbed-manager] 2025-09-06 00:35:18.667515 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:35:18.667526 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:35:18.667538 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:35:18.667551 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:35:18.667586 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:35:18.667599 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:35:18.667611 | orchestrator | 2025-09-06 00:35:18.667624 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-09-06 00:35:18.667637 | orchestrator | Saturday 06 September 2025 00:35:00 +0000 (0:00:01.179) 0:00:05.306 **** 2025-09-06 00:35:18.667650 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:35:18.667662 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:35:18.667675 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:35:18.667688 | orchestrator | ok: [testbed-node-3] 2025-09-06 
00:35:18.667700 | orchestrator | ok: [testbed-manager] 2025-09-06 00:35:18.667712 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:35:18.667724 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:35:18.667735 | orchestrator | 2025-09-06 00:35:18.667749 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-06 00:35:18.667761 | orchestrator | Saturday 06 September 2025 00:35:01 +0000 (0:00:01.157) 0:00:06.463 **** 2025-09-06 00:35:18.667773 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:35:18.667786 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:35:18.667798 | orchestrator | changed: [testbed-manager] 2025-09-06 00:35:18.667812 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:35:18.667824 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:35:18.667837 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:35:18.667849 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:35:18.667862 | orchestrator | 2025-09-06 00:35:18.667874 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-06 00:35:18.667887 | orchestrator | Saturday 06 September 2025 00:35:02 +0000 (0:00:00.865) 0:00:07.328 **** 2025-09-06 00:35:18.667899 | orchestrator | changed: [testbed-manager] 2025-09-06 00:35:18.667909 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:35:18.667920 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:35:18.667930 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:35:18.667941 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:35:18.667951 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:35:18.667962 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:35:18.667972 | orchestrator | 2025-09-06 00:35:18.667983 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-06 00:35:18.667994 | orchestrator | Saturday 06 September 2025 00:35:15 +0000 (0:00:12.504) 0:00:19.832 **** 2025-09-06 00:35:18.668005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:35:18.668016 | orchestrator | 2025-09-06 00:35:18.668027 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-06 00:35:18.668037 | orchestrator | Saturday 06 September 2025 00:35:16 +0000 (0:00:01.347) 0:00:21.180 **** 2025-09-06 00:35:18.668048 | orchestrator | changed: [testbed-manager] 2025-09-06 00:35:18.668059 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:35:18.668069 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:35:18.668080 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:35:18.668090 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:35:18.668101 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:35:18.668111 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:35:18.668122 | orchestrator | 2025-09-06 00:35:18.668132 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:35:18.668143 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:35:18.668174 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:35:18.668192 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:35:18.668211 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:35:18.668223 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:35:18.668234 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:35:18.668244 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:35:18.668275 | orchestrator | 2025-09-06 00:35:18.668286 | orchestrator | 2025-09-06 00:35:18.668297 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:35:18.668308 | orchestrator | Saturday 06 September 2025 00:35:18 +0000 (0:00:01.848) 0:00:23.028 **** 2025-09-06 00:35:18.668319 | orchestrator | =============================================================================== 2025-09-06 00:35:18.668329 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.50s 2025-09-06 00:35:18.668340 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.97s 2025-09-06 00:35:18.668351 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.85s 2025-09-06 00:35:18.668361 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.35s 2025-09-06 00:35:18.668372 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.19s 2025-09-06 00:35:18.668383 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s 2025-09-06 00:35:18.668393 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.16s 2025-09-06 00:35:18.668404 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.87s 2025-09-06 00:35:18.668415 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.69s 2025-09-06 00:35:18.996872 | orchestrator | ++ semver latest 7.1.1 2025-09-06 00:35:19.060025 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-06 00:35:19.060114 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-06 00:35:19.060127 | orchestrator | + sudo systemctl restart manager.service 2025-09-06 00:35:32.254766 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-06 00:35:32.254887 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-06 00:35:32.254904 | orchestrator | + local max_attempts=60 2025-09-06 00:35:32.254917 | orchestrator | + local name=ceph-ansible 2025-09-06 00:35:32.254928 | orchestrator | + local attempt_num=1 2025-09-06 00:35:32.254951 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:35:32.294291 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:35:32.294382 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:35:32.294395 | orchestrator | + sleep 5 2025-09-06 00:35:37.300069 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:35:37.355197 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:35:37.355310 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:35:37.355326 | orchestrator | + sleep 5 2025-09-06 
00:35:42.358694 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:35:42.399134 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:35:42.399169 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:35:42.399181 | orchestrator | + sleep 5 2025-09-06 00:35:47.403887 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:35:47.436656 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:35:47.436695 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:35:47.436707 | orchestrator | + sleep 5 2025-09-06 00:35:52.441021 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:35:52.479328 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:35:52.479410 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:35:52.479434 | orchestrator | + sleep 5 2025-09-06 00:35:57.484769 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:35:57.521917 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:35:57.521975 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:35:57.521988 | orchestrator | + sleep 5 2025-09-06 00:36:02.526553 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:36:02.571475 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:36:02.571563 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:36:02.571579 | orchestrator | + sleep 5 2025-09-06 00:36:07.576045 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:36:07.614542 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-06 00:36:07.614633 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:36:07.614647 | orchestrator | + sleep 5 2025-09-06 00:36:12.619103 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:36:12.651933 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-06 00:36:12.651998 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:36:12.652012 | orchestrator | + sleep 5 2025-09-06 00:36:17.655254 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:36:17.692630 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-06 00:36:17.692700 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:36:17.692714 | orchestrator | + sleep 5 2025-09-06 00:36:22.698168 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:36:22.737490 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-06 00:36:22.737543 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:36:22.737556 | orchestrator | + sleep 5 2025-09-06 00:36:27.742839 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:36:27.784513 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-06 00:36:27.784576 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-06 00:36:27.784591 | orchestrator | + sleep 5 2025-09-06 00:36:32.789135 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:36:32.829138 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-06 00:36:32.829228 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-06 00:36:32.829243 | orchestrator | + sleep 5 2025-09-06 00:36:37.834032 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-06 00:36:37.876278 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:36:37.876372 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-06 00:36:37.876388 | orchestrator | + local max_attempts=60 2025-09-06 00:36:37.876401 | orchestrator | + local name=kolla-ansible 2025-09-06 00:36:37.876413 | orchestrator | + local attempt_num=1 2025-09-06 00:36:37.876784 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-06 00:36:37.918494 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:36:37.918591 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-06 00:36:37.918606 | orchestrator | + local max_attempts=60 2025-09-06 00:36:37.918619 | orchestrator | + local name=osism-ansible 2025-09-06 00:36:37.918631 | orchestrator | + local attempt_num=1 2025-09-06 00:36:37.919485 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-06 00:36:37.957900 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-06 00:36:37.957931 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-06 00:36:37.957944 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-06 00:36:38.128541 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-06 00:36:38.282279 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-06 00:36:38.410578 | orchestrator | ARA in osism-ansible already disabled. 2025-09-06 00:36:38.557493 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-06 00:36:38.557825 | orchestrator | + osism apply gather-facts 2025-09-06 00:36:50.576265 | orchestrator | 2025-09-06 00:36:50 | INFO  | Task fa18acf4-7be8-49ed-aa94-2db812331553 (gather-facts) was prepared for execution. 2025-09-06 00:36:50.576378 | orchestrator | 2025-09-06 00:36:50 | INFO  | It takes a moment until task fa18acf4-7be8-49ed-aa94-2db812331553 (gather-facts) has been started and output is visible here. 
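The trace above polls the ceph-ansible container until Docker reports it healthy (unhealthy -> starting -> healthy), then repeats the same check for kolla-ansible and osism-ansible. A minimal sketch of such a polling helper, assuming the containers define a Docker HEALTHCHECK; the function name, arguments and 5-second interval mirror the trace, but the body is a reconstruction, not the testbed's actual script:

    # Poll a container's health status until it reports "healthy" or the
    # attempt limit is reached (reconstruction of the pattern in the trace).
    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num++ == max_attempts )); then
                echo "container $name did not become healthy in time" >&2
                return 1
            fi
            sleep 5
        done
    }

    # Usage, in the order seen in the log:
    # wait_for_container_healthy 60 ceph-ansible
    # wait_for_container_healthy 60 kolla-ansible
    # wait_for_container_healthy 60 osism-ansible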
2025-09-06 00:37:02.901886 | orchestrator | 2025-09-06 00:37:02.901978 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-06 00:37:02.902069 | orchestrator | 2025-09-06 00:37:02.902086 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-06 00:37:02.902098 | orchestrator | Saturday 06 September 2025 00:36:54 +0000 (0:00:00.200) 0:00:00.200 **** 2025-09-06 00:37:02.902109 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:37:02.902121 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:37:02.902131 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:37:02.902142 | orchestrator | ok: [testbed-manager] 2025-09-06 00:37:02.902153 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:37:02.902205 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:37:02.902218 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:37:02.902229 | orchestrator | 2025-09-06 00:37:02.902240 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-06 00:37:02.902251 | orchestrator | 2025-09-06 00:37:02.902262 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-06 00:37:02.902273 | orchestrator | Saturday 06 September 2025 00:37:02 +0000 (0:00:08.093) 0:00:08.294 **** 2025-09-06 00:37:02.902283 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:37:02.902295 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:37:02.902305 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:37:02.902316 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:37:02.902327 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:37:02.902337 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:37:02.902348 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:37:02.902358 | orchestrator | 2025-09-06 00:37:02.902369 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:37:02.902380 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:37:02.902392 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:37:02.902403 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:37:02.902414 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:37:02.902424 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:37:02.902435 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:37:02.902446 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:37:02.902460 | orchestrator | 2025-09-06 00:37:02.902473 | orchestrator | 2025-09-06 00:37:02.902487 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:37:02.902500 | orchestrator | Saturday 06 September 2025 00:37:02 +0000 (0:00:00.435) 0:00:08.730 **** 2025-09-06 00:37:02.902513 | orchestrator | =============================================================================== 2025-09-06 00:37:02.902526 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.09s 2025-09-06 
00:37:02.902538 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s 2025-09-06 00:37:03.113203 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-06 00:37:03.129402 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-06 00:37:03.140058 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-06 00:37:03.155283 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-06 00:37:03.166512 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-06 00:37:03.176102 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-06 00:37:03.186481 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-06 00:37:03.197344 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-06 00:37:03.214139 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-06 00:37:03.226379 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-06 00:37:03.237348 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-06 00:37:03.247022 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-06 00:37:03.256337 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-06 00:37:03.263761 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-06 00:37:03.271679 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-06 00:37:03.279627 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-06 00:37:03.287614 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-06 00:37:03.297850 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-06 00:37:03.311325 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-06 00:37:03.322946 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-06 00:37:03.333326 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-06 00:37:03.830036 | orchestrator | ok: Runtime: 0:23:02.286206 2025-09-06 00:37:03.930037 | 2025-09-06 00:37:03.930168 | TASK [Deploy services] 2025-09-06 00:37:04.461772 | orchestrator | skipping: Conditional result was False 2025-09-06 00:37:04.481190 | 2025-09-06 00:37:04.481387 | TASK [Deploy in a nutshell] 2025-09-06 00:37:05.127574 | orchestrator | + set -e 
2025-09-06 00:37:05.127713 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-06 00:37:05.127731 | orchestrator | ++ export INTERACTIVE=false 2025-09-06 00:37:05.127746 | orchestrator | ++ INTERACTIVE=false 2025-09-06 00:37:05.129221 | orchestrator | 2025-09-06 00:37:05.129238 | orchestrator | # PULL IMAGES 2025-09-06 00:37:05.129249 | orchestrator | 2025-09-06 00:37:05.129281 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-06 00:37:05.129298 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-06 00:37:05.129309 | orchestrator | + source /opt/manager-vars.sh 2025-09-06 00:37:05.129318 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-06 00:37:05.129331 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-06 00:37:05.129339 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-06 00:37:05.129352 | orchestrator | ++ CEPH_VERSION=reef 2025-09-06 00:37:05.129361 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-06 00:37:05.129374 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-06 00:37:05.129382 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-06 00:37:05.129392 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-06 00:37:05.129401 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-06 00:37:05.129409 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-06 00:37:05.129417 | orchestrator | ++ export ARA=false 2025-09-06 00:37:05.129426 | orchestrator | ++ ARA=false 2025-09-06 00:37:05.129434 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-06 00:37:05.129442 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-06 00:37:05.129450 | orchestrator | ++ export TEMPEST=true 2025-09-06 00:37:05.129458 | orchestrator | ++ TEMPEST=true 2025-09-06 00:37:05.129465 | orchestrator | ++ export IS_ZUUL=true 2025-09-06 00:37:05.129473 | orchestrator | ++ IS_ZUUL=true 2025-09-06 00:37:05.129481 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.59 2025-09-06 00:37:05.129489 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.59 2025-09-06 00:37:05.129497 | orchestrator | ++ export EXTERNAL_API=false 2025-09-06 00:37:05.129505 | orchestrator | ++ EXTERNAL_API=false 2025-09-06 00:37:05.129513 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-06 00:37:05.129522 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-06 00:37:05.129530 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-06 00:37:05.129538 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-06 00:37:05.129546 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-06 00:37:05.129554 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-06 00:37:05.129562 | orchestrator | + echo 2025-09-06 00:37:05.129570 | orchestrator | + echo '# PULL IMAGES' 2025-09-06 00:37:05.129578 | orchestrator | + echo 2025-09-06 00:37:05.129586 | orchestrator | ++ semver latest 7.0.0 2025-09-06 00:37:05.161565 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-06 00:37:05.161603 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-06 00:37:05.161612 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-06 00:37:06.761980 | orchestrator | 2025-09-06 00:37:06 | INFO  | Trying to run play pull-images in environment custom 2025-09-06 00:37:16.900652 | orchestrator | 2025-09-06 00:37:16 | INFO  | Task 6e34438c-6cad-4667-af9a-f05c8f52a99e (pull-images) was prepared for execution. 2025-09-06 00:37:16.900752 | orchestrator | 2025-09-06 00:37:16 | INFO  | Task 6e34438c-6cad-4667-af9a-f05c8f52a99e is running in background. No more output. Check ARA for logs. 
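The deploy script sources /opt/manager-vars.sh and then gates version-dependent steps on a semver comparison, treating MANAGER_VERSION=latest as new enough. A hedged sketch of that gate, assuming a semver helper that prints -1/0/1 as the trace suggests (the helper's implementation is not shown in the log):

    # Run the modern code path if MANAGER_VERSION >= 7.0.0 or is "latest".
    # "semver" is assumed to print -1, 0 or 1, matching the traced output above.
    if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ]] || [[ "$MANAGER_VERSION" == "latest" ]]; then
        # --no-wait returns once the task is queued, -r 2 retries the play,
        # -e custom selects the custom environment (flags as seen in the log).
        osism apply --no-wait -r 2 -e custom pull-images
    fi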
2025-09-06 00:37:19.161008 | orchestrator | 2025-09-06 00:37:19 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-06 00:37:29.354110 | orchestrator | 2025-09-06 00:37:29 | INFO  | Task 5d75a1d6-c5cc-4e4c-a33b-3662c8568833 (wipe-partitions) was prepared for execution. 2025-09-06 00:37:29.354257 | orchestrator | 2025-09-06 00:37:29 | INFO  | It takes a moment until task 5d75a1d6-c5cc-4e4c-a33b-3662c8568833 (wipe-partitions) has been started and output is visible here. 2025-09-06 00:37:42.233701 | orchestrator | 2025-09-06 00:37:42.233825 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-06 00:37:42.233843 | orchestrator | 2025-09-06 00:37:42.233855 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-06 00:37:42.233875 | orchestrator | Saturday 06 September 2025 00:37:33 +0000 (0:00:00.130) 0:00:00.130 **** 2025-09-06 00:37:42.233886 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:37:42.233898 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:37:42.233909 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:37:42.233921 | orchestrator | 2025-09-06 00:37:42.233932 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-06 00:37:42.233969 | orchestrator | Saturday 06 September 2025 00:37:33 +0000 (0:00:00.592) 0:00:00.723 **** 2025-09-06 00:37:42.233980 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:37:42.233991 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:37:42.234006 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:37:42.234070 | orchestrator | 2025-09-06 00:37:42.234083 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-06 00:37:42.234095 | orchestrator | Saturday 06 September 2025 00:37:34 +0000 (0:00:00.260) 0:00:00.983 **** 2025-09-06 00:37:42.234106 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:37:42.234152 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:37:42.234165 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:37:42.234176 | orchestrator | 2025-09-06 00:37:42.234187 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-06 00:37:42.234198 | orchestrator | Saturday 06 September 2025 00:37:34 +0000 (0:00:00.742) 0:00:01.726 **** 2025-09-06 00:37:42.234209 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:37:42.234220 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:37:42.234230 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:37:42.234241 | orchestrator | 2025-09-06 00:37:42.234251 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-06 00:37:42.234262 | orchestrator | Saturday 06 September 2025 00:37:35 +0000 (0:00:00.247) 0:00:01.974 **** 2025-09-06 00:37:42.234273 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-06 00:37:42.234288 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-06 00:37:42.234299 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-06 00:37:42.234309 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-06 00:37:42.234320 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-06 00:37:42.234331 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-06 00:37:42.234341 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 
2025-09-06 00:37:42.234352 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-06 00:37:42.234362 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-06 00:37:42.234373 | orchestrator | 2025-09-06 00:37:42.234384 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-06 00:37:42.234395 | orchestrator | Saturday 06 September 2025 00:37:37 +0000 (0:00:02.193) 0:00:04.167 **** 2025-09-06 00:37:42.234406 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-06 00:37:42.234417 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-06 00:37:42.234428 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-06 00:37:42.234439 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-06 00:37:42.234449 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-06 00:37:42.234460 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-09-06 00:37:42.234470 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-06 00:37:42.234481 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-06 00:37:42.234491 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-06 00:37:42.234502 | orchestrator | 2025-09-06 00:37:42.234513 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-06 00:37:42.234524 | orchestrator | Saturday 06 September 2025 00:37:38 +0000 (0:00:01.306) 0:00:05.474 **** 2025-09-06 00:37:42.234534 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-06 00:37:42.234545 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-06 00:37:42.234556 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-06 00:37:42.234566 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-06 00:37:42.234577 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-06 00:37:42.234588 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-06 00:37:42.234598 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-06 00:37:42.234620 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-06 00:37:42.234638 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-06 00:37:42.234649 | orchestrator | 2025-09-06 00:37:42.234660 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-06 00:37:42.234670 | orchestrator | Saturday 06 September 2025 00:37:40 +0000 (0:00:02.131) 0:00:07.606 **** 2025-09-06 00:37:42.234681 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:37:42.234692 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:37:42.234702 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:37:42.234713 | orchestrator | 2025-09-06 00:37:42.234724 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-09-06 00:37:42.234734 | orchestrator | Saturday 06 September 2025 00:37:41 +0000 (0:00:00.581) 0:00:08.187 **** 2025-09-06 00:37:42.234745 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:37:42.234756 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:37:42.234766 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:37:42.234777 | orchestrator | 2025-09-06 00:37:42.234787 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:37:42.234801 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:37:42.234813 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:37:42.234843 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:37:42.234854 | orchestrator | 2025-09-06 00:37:42.234865 | orchestrator | 2025-09-06 00:37:42.234876 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:37:42.234887 | orchestrator | Saturday 06 September 2025 00:37:41 +0000 (0:00:00.666) 0:00:08.853 **** 2025-09-06 00:37:42.234898 | orchestrator | =============================================================================== 2025-09-06 00:37:42.234908 | orchestrator | Check device availability ----------------------------------------------- 2.19s 2025-09-06 00:37:42.234919 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s 2025-09-06 00:37:42.234930 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.31s 2025-09-06 00:37:42.234941 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.74s 2025-09-06 00:37:42.234952 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s 2025-09-06 00:37:42.234962 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-09-06 00:37:42.234973 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2025-09-06 00:37:42.234984 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s 2025-09-06 00:37:42.234994 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-09-06 00:37:54.422945 | orchestrator | 2025-09-06 00:37:54 | INFO  | Task 547a7e47-3bdd-4ee7-a455-8d65ccee0697 (facts) was prepared for execution. 2025-09-06 00:37:54.423067 | orchestrator | 2025-09-06 00:37:54 | INFO  | It takes a moment until task 547a7e47-3bdd-4ee7-a455-8d65ccee0697 (facts) has been started and output is visible here. 
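The wipe-partitions play above amounts to clearing filesystem/LVM signatures and the first 32M of each OSD candidate disk, then letting udev re-read the devices. Roughly equivalent shell for a single disk, shown only as a sketch (device name and the 32M figure taken from the play; the play's actual module invocations may differ, and this is destructive to the named disk):

    DISK=/dev/sdb                    # one OSD candidate disk from the play
    sudo wipefs -a "$DISK"           # drop filesystem/LVM/partition signatures
    sudo dd if=/dev/zero of="$DISK" bs=1M count=32 oflag=direct   # overwrite first 32M with zeros
    sudo udevadm control --reload-rules   # reload udev rules
    sudo udevadm trigger                  # request device events from the kernel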
2025-09-06 00:38:06.160682 | orchestrator | 2025-09-06 00:38:06.160808 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-06 00:38:06.160826 | orchestrator | 2025-09-06 00:38:06.160839 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-06 00:38:06.160851 | orchestrator | Saturday 06 September 2025 00:37:58 +0000 (0:00:00.261) 0:00:00.261 **** 2025-09-06 00:38:06.160862 | orchestrator | ok: [testbed-manager] 2025-09-06 00:38:06.160874 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:38:06.160886 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:38:06.160922 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:38:06.160934 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:38:06.160945 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:38:06.160956 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:38:06.160967 | orchestrator | 2025-09-06 00:38:06.160978 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-06 00:38:06.160989 | orchestrator | Saturday 06 September 2025 00:37:59 +0000 (0:00:01.031) 0:00:01.292 **** 2025-09-06 00:38:06.161000 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:38:06.161012 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:38:06.161023 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:38:06.161034 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:38:06.161045 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:06.161056 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:06.161067 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:06.161077 | orchestrator | 2025-09-06 00:38:06.161137 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-06 00:38:06.161150 | orchestrator | 2025-09-06 00:38:06.161179 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-06 00:38:06.161191 | orchestrator | Saturday 06 September 2025 00:38:00 +0000 (0:00:01.198) 0:00:02.490 **** 2025-09-06 00:38:06.161201 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:38:06.161212 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:38:06.161226 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:38:06.161239 | orchestrator | ok: [testbed-manager] 2025-09-06 00:38:06.161252 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:38:06.161264 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:38:06.161277 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:38:06.161289 | orchestrator | 2025-09-06 00:38:06.161302 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-06 00:38:06.161315 | orchestrator | 2025-09-06 00:38:06.161328 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-06 00:38:06.161342 | orchestrator | Saturday 06 September 2025 00:38:05 +0000 (0:00:04.561) 0:00:07.052 **** 2025-09-06 00:38:06.161355 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:38:06.161367 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:38:06.161380 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:38:06.161394 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:38:06.161406 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:06.161420 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:06.161432 | orchestrator | skipping: 
[testbed-node-5] 2025-09-06 00:38:06.161445 | orchestrator | 2025-09-06 00:38:06.161458 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:38:06.161470 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:38:06.161486 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:38:06.161500 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:38:06.161512 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:38:06.161526 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:38:06.161538 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:38:06.161551 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:38:06.161565 | orchestrator | 2025-09-06 00:38:06.161585 | orchestrator | 2025-09-06 00:38:06.161596 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:38:06.161607 | orchestrator | Saturday 06 September 2025 00:38:05 +0000 (0:00:00.676) 0:00:07.728 **** 2025-09-06 00:38:06.161618 | orchestrator | =============================================================================== 2025-09-06 00:38:06.161629 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.56s 2025-09-06 00:38:06.161639 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s 2025-09-06 00:38:06.161650 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.03s 2025-09-06 00:38:06.161662 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.68s 2025-09-06 00:38:08.499111 | orchestrator | 2025-09-06 00:38:08 | INFO  | Task 304bf766-63f1-4f5d-8415-35eb6c088c19 (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-06 00:38:08.499202 | orchestrator | 2025-09-06 00:38:08 | INFO  | It takes a moment until task 304bf766-63f1-4f5d-8415-35eb6c088c19 (ceph-configure-lvm-volumes) has been started and output is visible here. 
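The ceph-configure-lvm-volumes play that follows enumerates each node's block devices plus their /dev/disk/by-id aliases, assigns a UUID per OSD disk, and derives matching VG/LV names (ceph-<uuid> / osd-block-<uuid>). A small sketch of the same inspection done by hand; the naming pattern is taken from the generated lvm_volumes printed further below, while uuidgen merely stands in for however the play actually derives the per-disk UUID:

    # List whole disks and their stable by-id aliases, as the play does.
    lsblk -dn -o NAME,SIZE,TYPE
    ls -l /dev/disk/by-id/ | grep -E 'sdb|sdc'

    # Naming pattern used in the generated LVM config (one UUID per OSD disk):
    OSD_UUID=$(uuidgen)
    echo "data_vg: ceph-${OSD_UUID}"
    echo "data:    osd-block-${OSD_UUID}"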
2025-09-06 00:38:19.715197 | orchestrator | 2025-09-06 00:38:19.715314 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-06 00:38:19.715332 | orchestrator | 2025-09-06 00:38:19.715345 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-06 00:38:19.715357 | orchestrator | Saturday 06 September 2025 00:38:12 +0000 (0:00:00.310) 0:00:00.310 **** 2025-09-06 00:38:19.715368 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-06 00:38:19.715379 | orchestrator | 2025-09-06 00:38:19.715390 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-06 00:38:19.715401 | orchestrator | Saturday 06 September 2025 00:38:12 +0000 (0:00:00.254) 0:00:00.564 **** 2025-09-06 00:38:19.715413 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:38:19.715424 | orchestrator | 2025-09-06 00:38:19.715435 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.715446 | orchestrator | Saturday 06 September 2025 00:38:13 +0000 (0:00:00.217) 0:00:00.782 **** 2025-09-06 00:38:19.715457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-06 00:38:19.715468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-06 00:38:19.715480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-06 00:38:19.715502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-06 00:38:19.715514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-06 00:38:19.715525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-06 00:38:19.715535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-06 00:38:19.715546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-06 00:38:19.715558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-06 00:38:19.715569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-06 00:38:19.715579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-06 00:38:19.715590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-06 00:38:19.715600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-06 00:38:19.715611 | orchestrator | 2025-09-06 00:38:19.715622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.715633 | orchestrator | Saturday 06 September 2025 00:38:13 +0000 (0:00:00.339) 0:00:01.122 **** 2025-09-06 00:38:19.715644 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.715679 | orchestrator | 2025-09-06 00:38:19.715693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.715706 | orchestrator | Saturday 06 September 2025 00:38:13 +0000 (0:00:00.445) 0:00:01.568 **** 2025-09-06 00:38:19.715718 | orchestrator | skipping: [testbed-node-3] 2025-09-06 
00:38:19.715730 | orchestrator | 2025-09-06 00:38:19.715743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.715755 | orchestrator | Saturday 06 September 2025 00:38:13 +0000 (0:00:00.197) 0:00:01.765 **** 2025-09-06 00:38:19.715767 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.715780 | orchestrator | 2025-09-06 00:38:19.715792 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.715807 | orchestrator | Saturday 06 September 2025 00:38:14 +0000 (0:00:00.191) 0:00:01.957 **** 2025-09-06 00:38:19.715819 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.715836 | orchestrator | 2025-09-06 00:38:19.715849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.715862 | orchestrator | Saturday 06 September 2025 00:38:14 +0000 (0:00:00.192) 0:00:02.149 **** 2025-09-06 00:38:19.715874 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.715886 | orchestrator | 2025-09-06 00:38:19.715899 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.715912 | orchestrator | Saturday 06 September 2025 00:38:14 +0000 (0:00:00.190) 0:00:02.340 **** 2025-09-06 00:38:19.715924 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.715938 | orchestrator | 2025-09-06 00:38:19.715950 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.715963 | orchestrator | Saturday 06 September 2025 00:38:14 +0000 (0:00:00.201) 0:00:02.542 **** 2025-09-06 00:38:19.715975 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.715988 | orchestrator | 2025-09-06 00:38:19.716001 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.716013 | orchestrator | Saturday 06 September 2025 00:38:14 +0000 (0:00:00.195) 0:00:02.738 **** 2025-09-06 00:38:19.716026 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.716039 | orchestrator | 2025-09-06 00:38:19.716050 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.716061 | orchestrator | Saturday 06 September 2025 00:38:15 +0000 (0:00:00.190) 0:00:02.928 **** 2025-09-06 00:38:19.716092 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20) 2025-09-06 00:38:19.716106 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20) 2025-09-06 00:38:19.716117 | orchestrator | 2025-09-06 00:38:19.716128 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.716138 | orchestrator | Saturday 06 September 2025 00:38:15 +0000 (0:00:00.402) 0:00:03.331 **** 2025-09-06 00:38:19.716166 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8) 2025-09-06 00:38:19.716178 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8) 2025-09-06 00:38:19.716189 | orchestrator | 2025-09-06 00:38:19.716199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.716210 | orchestrator | Saturday 06 September 2025 00:38:15 +0000 (0:00:00.388) 0:00:03.719 **** 2025-09-06 
00:38:19.716226 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff) 2025-09-06 00:38:19.716237 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff) 2025-09-06 00:38:19.716248 | orchestrator | 2025-09-06 00:38:19.716259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.716270 | orchestrator | Saturday 06 September 2025 00:38:16 +0000 (0:00:00.569) 0:00:04.289 **** 2025-09-06 00:38:19.716280 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5) 2025-09-06 00:38:19.716299 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5) 2025-09-06 00:38:19.716309 | orchestrator | 2025-09-06 00:38:19.716320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:19.716331 | orchestrator | Saturday 06 September 2025 00:38:17 +0000 (0:00:00.599) 0:00:04.889 **** 2025-09-06 00:38:19.716342 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-06 00:38:19.716352 | orchestrator | 2025-09-06 00:38:19.716363 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:19.716374 | orchestrator | Saturday 06 September 2025 00:38:17 +0000 (0:00:00.652) 0:00:05.541 **** 2025-09-06 00:38:19.716384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-06 00:38:19.716395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-06 00:38:19.716406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-06 00:38:19.716416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-06 00:38:19.716427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-06 00:38:19.716437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-06 00:38:19.716448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-06 00:38:19.716458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-06 00:38:19.716469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-06 00:38:19.716479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-06 00:38:19.716490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-06 00:38:19.716501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-06 00:38:19.716511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-06 00:38:19.716522 | orchestrator | 2025-09-06 00:38:19.716533 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:19.716544 | orchestrator | Saturday 06 September 2025 00:38:18 +0000 (0:00:00.356) 0:00:05.898 **** 2025-09-06 00:38:19.716554 | orchestrator | skipping: [testbed-node-3] 
2025-09-06 00:38:19.716565 | orchestrator | 2025-09-06 00:38:19.716576 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:19.716586 | orchestrator | Saturday 06 September 2025 00:38:18 +0000 (0:00:00.193) 0:00:06.092 **** 2025-09-06 00:38:19.716597 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.716607 | orchestrator | 2025-09-06 00:38:19.716618 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:19.716628 | orchestrator | Saturday 06 September 2025 00:38:18 +0000 (0:00:00.192) 0:00:06.285 **** 2025-09-06 00:38:19.716639 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.716650 | orchestrator | 2025-09-06 00:38:19.716660 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:19.716671 | orchestrator | Saturday 06 September 2025 00:38:18 +0000 (0:00:00.201) 0:00:06.486 **** 2025-09-06 00:38:19.716681 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.716692 | orchestrator | 2025-09-06 00:38:19.716703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:19.716713 | orchestrator | Saturday 06 September 2025 00:38:18 +0000 (0:00:00.206) 0:00:06.693 **** 2025-09-06 00:38:19.716724 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.716735 | orchestrator | 2025-09-06 00:38:19.716745 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:19.716761 | orchestrator | Saturday 06 September 2025 00:38:19 +0000 (0:00:00.199) 0:00:06.892 **** 2025-09-06 00:38:19.716772 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.716783 | orchestrator | 2025-09-06 00:38:19.716794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:19.716804 | orchestrator | Saturday 06 September 2025 00:38:19 +0000 (0:00:00.189) 0:00:07.082 **** 2025-09-06 00:38:19.716815 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:19.716825 | orchestrator | 2025-09-06 00:38:19.716836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:19.716847 | orchestrator | Saturday 06 September 2025 00:38:19 +0000 (0:00:00.185) 0:00:07.268 **** 2025-09-06 00:38:19.716864 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.798994 | orchestrator | 2025-09-06 00:38:26.799130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:26.799148 | orchestrator | Saturday 06 September 2025 00:38:19 +0000 (0:00:00.214) 0:00:07.482 **** 2025-09-06 00:38:26.799159 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-06 00:38:26.799170 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-06 00:38:26.799180 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-06 00:38:26.799190 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-06 00:38:26.799200 | orchestrator | 2025-09-06 00:38:26.799210 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:26.799219 | orchestrator | Saturday 06 September 2025 00:38:20 +0000 (0:00:00.890) 0:00:08.372 **** 2025-09-06 00:38:26.799247 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.799258 | orchestrator | 2025-09-06 00:38:26.799268 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:26.799278 | orchestrator | Saturday 06 September 2025 00:38:20 +0000 (0:00:00.197) 0:00:08.570 **** 2025-09-06 00:38:26.799288 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.799298 | orchestrator | 2025-09-06 00:38:26.799307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:26.799317 | orchestrator | Saturday 06 September 2025 00:38:20 +0000 (0:00:00.180) 0:00:08.750 **** 2025-09-06 00:38:26.799327 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.799337 | orchestrator | 2025-09-06 00:38:26.799347 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:26.799357 | orchestrator | Saturday 06 September 2025 00:38:21 +0000 (0:00:00.194) 0:00:08.945 **** 2025-09-06 00:38:26.799367 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.799377 | orchestrator | 2025-09-06 00:38:26.799386 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-06 00:38:26.799396 | orchestrator | Saturday 06 September 2025 00:38:21 +0000 (0:00:00.192) 0:00:09.138 **** 2025-09-06 00:38:26.799406 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-06 00:38:26.799416 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-06 00:38:26.799425 | orchestrator | 2025-09-06 00:38:26.799435 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-06 00:38:26.799445 | orchestrator | Saturday 06 September 2025 00:38:21 +0000 (0:00:00.168) 0:00:09.307 **** 2025-09-06 00:38:26.799455 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.799465 | orchestrator | 2025-09-06 00:38:26.799474 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-06 00:38:26.799484 | orchestrator | Saturday 06 September 2025 00:38:21 +0000 (0:00:00.132) 0:00:09.439 **** 2025-09-06 00:38:26.799494 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.799503 | orchestrator | 2025-09-06 00:38:26.799513 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-06 00:38:26.799523 | orchestrator | Saturday 06 September 2025 00:38:21 +0000 (0:00:00.138) 0:00:09.577 **** 2025-09-06 00:38:26.799535 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.799566 | orchestrator | 2025-09-06 00:38:26.799578 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-06 00:38:26.799589 | orchestrator | Saturday 06 September 2025 00:38:21 +0000 (0:00:00.142) 0:00:09.720 **** 2025-09-06 00:38:26.799601 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:38:26.799613 | orchestrator | 2025-09-06 00:38:26.799624 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-06 00:38:26.799635 | orchestrator | Saturday 06 September 2025 00:38:22 +0000 (0:00:00.137) 0:00:09.858 **** 2025-09-06 00:38:26.799646 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'}}) 2025-09-06 00:38:26.799658 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e6b4ea58-4fde-56e5-979f-346e927a82c3'}}) 2025-09-06 00:38:26.799670 | orchestrator | 
2025-09-06 00:38:26.799682 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-06 00:38:26.799693 | orchestrator | Saturday 06 September 2025 00:38:22 +0000 (0:00:00.169) 0:00:10.028 **** 2025-09-06 00:38:26.799705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'}})  2025-09-06 00:38:26.799726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e6b4ea58-4fde-56e5-979f-346e927a82c3'}})  2025-09-06 00:38:26.799739 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.799751 | orchestrator | 2025-09-06 00:38:26.799762 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-06 00:38:26.799774 | orchestrator | Saturday 06 September 2025 00:38:22 +0000 (0:00:00.131) 0:00:10.160 **** 2025-09-06 00:38:26.799785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'}})  2025-09-06 00:38:26.799797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e6b4ea58-4fde-56e5-979f-346e927a82c3'}})  2025-09-06 00:38:26.799809 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.799820 | orchestrator | 2025-09-06 00:38:26.799832 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-06 00:38:26.799844 | orchestrator | Saturday 06 September 2025 00:38:22 +0000 (0:00:00.335) 0:00:10.496 **** 2025-09-06 00:38:26.799855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'}})  2025-09-06 00:38:26.799866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e6b4ea58-4fde-56e5-979f-346e927a82c3'}})  2025-09-06 00:38:26.799878 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.799888 | orchestrator | 2025-09-06 00:38:26.799915 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-06 00:38:26.799926 | orchestrator | Saturday 06 September 2025 00:38:22 +0000 (0:00:00.145) 0:00:10.641 **** 2025-09-06 00:38:26.799935 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:38:26.799945 | orchestrator | 2025-09-06 00:38:26.799955 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-06 00:38:26.799964 | orchestrator | Saturday 06 September 2025 00:38:23 +0000 (0:00:00.135) 0:00:10.776 **** 2025-09-06 00:38:26.799974 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:38:26.799984 | orchestrator | 2025-09-06 00:38:26.799993 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-06 00:38:26.800004 | orchestrator | Saturday 06 September 2025 00:38:23 +0000 (0:00:00.135) 0:00:10.912 **** 2025-09-06 00:38:26.800013 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.800023 | orchestrator | 2025-09-06 00:38:26.800033 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-06 00:38:26.800042 | orchestrator | Saturday 06 September 2025 00:38:23 +0000 (0:00:00.131) 0:00:11.044 **** 2025-09-06 00:38:26.800052 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.800084 | orchestrator | 2025-09-06 00:38:26.800103 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-09-06 00:38:26.800113 | orchestrator | Saturday 06 September 2025 00:38:23 +0000 (0:00:00.129) 0:00:11.173 **** 2025-09-06 00:38:26.800123 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.800133 | orchestrator | 2025-09-06 00:38:26.800142 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-06 00:38:26.800152 | orchestrator | Saturday 06 September 2025 00:38:23 +0000 (0:00:00.132) 0:00:11.305 **** 2025-09-06 00:38:26.800162 | orchestrator | ok: [testbed-node-3] => { 2025-09-06 00:38:26.800171 | orchestrator |  "ceph_osd_devices": { 2025-09-06 00:38:26.800181 | orchestrator |  "sdb": { 2025-09-06 00:38:26.800192 | orchestrator |  "osd_lvm_uuid": "6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567" 2025-09-06 00:38:26.800202 | orchestrator |  }, 2025-09-06 00:38:26.800212 | orchestrator |  "sdc": { 2025-09-06 00:38:26.800222 | orchestrator |  "osd_lvm_uuid": "e6b4ea58-4fde-56e5-979f-346e927a82c3" 2025-09-06 00:38:26.800232 | orchestrator |  } 2025-09-06 00:38:26.800242 | orchestrator |  } 2025-09-06 00:38:26.800251 | orchestrator | } 2025-09-06 00:38:26.800261 | orchestrator | 2025-09-06 00:38:26.800271 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-06 00:38:26.800281 | orchestrator | Saturday 06 September 2025 00:38:23 +0000 (0:00:00.132) 0:00:11.438 **** 2025-09-06 00:38:26.800290 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.800300 | orchestrator | 2025-09-06 00:38:26.800309 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-06 00:38:26.800319 | orchestrator | Saturday 06 September 2025 00:38:23 +0000 (0:00:00.134) 0:00:11.572 **** 2025-09-06 00:38:26.800333 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.800344 | orchestrator | 2025-09-06 00:38:26.800353 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-06 00:38:26.800363 | orchestrator | Saturday 06 September 2025 00:38:23 +0000 (0:00:00.130) 0:00:11.702 **** 2025-09-06 00:38:26.800372 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:38:26.800382 | orchestrator | 2025-09-06 00:38:26.800391 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-06 00:38:26.800401 | orchestrator | Saturday 06 September 2025 00:38:24 +0000 (0:00:00.133) 0:00:11.835 **** 2025-09-06 00:38:26.800411 | orchestrator | changed: [testbed-node-3] => { 2025-09-06 00:38:26.800420 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-06 00:38:26.800430 | orchestrator |  "ceph_osd_devices": { 2025-09-06 00:38:26.800439 | orchestrator |  "sdb": { 2025-09-06 00:38:26.800449 | orchestrator |  "osd_lvm_uuid": "6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567" 2025-09-06 00:38:26.800458 | orchestrator |  }, 2025-09-06 00:38:26.800468 | orchestrator |  "sdc": { 2025-09-06 00:38:26.800478 | orchestrator |  "osd_lvm_uuid": "e6b4ea58-4fde-56e5-979f-346e927a82c3" 2025-09-06 00:38:26.800487 | orchestrator |  } 2025-09-06 00:38:26.800497 | orchestrator |  }, 2025-09-06 00:38:26.800506 | orchestrator |  "lvm_volumes": [ 2025-09-06 00:38:26.800516 | orchestrator |  { 2025-09-06 00:38:26.800526 | orchestrator |  "data": "osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567", 2025-09-06 00:38:26.800536 | orchestrator |  "data_vg": "ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567" 2025-09-06 00:38:26.800545 | orchestrator |  }, 2025-09-06 
00:38:26.800555 | orchestrator |  { 2025-09-06 00:38:26.800565 | orchestrator |  "data": "osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3", 2025-09-06 00:38:26.800574 | orchestrator |  "data_vg": "ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3" 2025-09-06 00:38:26.800584 | orchestrator |  } 2025-09-06 00:38:26.800593 | orchestrator |  ] 2025-09-06 00:38:26.800603 | orchestrator |  } 2025-09-06 00:38:26.800612 | orchestrator | } 2025-09-06 00:38:26.800622 | orchestrator | 2025-09-06 00:38:26.800631 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-06 00:38:26.800647 | orchestrator | Saturday 06 September 2025 00:38:24 +0000 (0:00:00.202) 0:00:12.038 **** 2025-09-06 00:38:26.800657 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-06 00:38:26.800667 | orchestrator | 2025-09-06 00:38:26.800676 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-06 00:38:26.800686 | orchestrator | 2025-09-06 00:38:26.800696 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-06 00:38:26.800705 | orchestrator | Saturday 06 September 2025 00:38:26 +0000 (0:00:02.060) 0:00:14.099 **** 2025-09-06 00:38:26.800715 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-06 00:38:26.800725 | orchestrator | 2025-09-06 00:38:26.800734 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-06 00:38:26.800744 | orchestrator | Saturday 06 September 2025 00:38:26 +0000 (0:00:00.244) 0:00:14.343 **** 2025-09-06 00:38:26.800753 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:38:26.800763 | orchestrator | 2025-09-06 00:38:26.800773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:26.800789 | orchestrator | Saturday 06 September 2025 00:38:26 +0000 (0:00:00.226) 0:00:14.570 **** 2025-09-06 00:38:34.268396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-06 00:38:34.268518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-06 00:38:34.268535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-06 00:38:34.268547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-06 00:38:34.268558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-06 00:38:34.268568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-06 00:38:34.268579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-06 00:38:34.268590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-06 00:38:34.268601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-06 00:38:34.268612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-06 00:38:34.268642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-06 00:38:34.268654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-06 00:38:34.268664 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-06 00:38:34.268681 | orchestrator | 2025-09-06 00:38:34.268693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.268705 | orchestrator | Saturday 06 September 2025 00:38:27 +0000 (0:00:00.352) 0:00:14.922 **** 2025-09-06 00:38:34.268717 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.268728 | orchestrator | 2025-09-06 00:38:34.268739 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.268750 | orchestrator | Saturday 06 September 2025 00:38:27 +0000 (0:00:00.195) 0:00:15.117 **** 2025-09-06 00:38:34.268761 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.268772 | orchestrator | 2025-09-06 00:38:34.268783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.268793 | orchestrator | Saturday 06 September 2025 00:38:27 +0000 (0:00:00.185) 0:00:15.303 **** 2025-09-06 00:38:34.268804 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.268815 | orchestrator | 2025-09-06 00:38:34.268826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.268836 | orchestrator | Saturday 06 September 2025 00:38:27 +0000 (0:00:00.191) 0:00:15.494 **** 2025-09-06 00:38:34.268847 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.268884 | orchestrator | 2025-09-06 00:38:34.268896 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.268907 | orchestrator | Saturday 06 September 2025 00:38:27 +0000 (0:00:00.204) 0:00:15.699 **** 2025-09-06 00:38:34.268921 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.268934 | orchestrator | 2025-09-06 00:38:34.268946 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.268959 | orchestrator | Saturday 06 September 2025 00:38:28 +0000 (0:00:00.528) 0:00:16.228 **** 2025-09-06 00:38:34.268972 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.268985 | orchestrator | 2025-09-06 00:38:34.268998 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.269010 | orchestrator | Saturday 06 September 2025 00:38:28 +0000 (0:00:00.184) 0:00:16.412 **** 2025-09-06 00:38:34.269020 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.269031 | orchestrator | 2025-09-06 00:38:34.269042 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.269075 | orchestrator | Saturday 06 September 2025 00:38:28 +0000 (0:00:00.213) 0:00:16.626 **** 2025-09-06 00:38:34.269087 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.269098 | orchestrator | 2025-09-06 00:38:34.269109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.269119 | orchestrator | Saturday 06 September 2025 00:38:29 +0000 (0:00:00.206) 0:00:16.832 **** 2025-09-06 00:38:34.269131 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1) 2025-09-06 00:38:34.269143 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1) 2025-09-06 00:38:34.269154 | orchestrator | 2025-09-06 
00:38:34.269165 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.269176 | orchestrator | Saturday 06 September 2025 00:38:29 +0000 (0:00:00.397) 0:00:17.230 **** 2025-09-06 00:38:34.269186 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74) 2025-09-06 00:38:34.269197 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74) 2025-09-06 00:38:34.269208 | orchestrator | 2025-09-06 00:38:34.269219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.269229 | orchestrator | Saturday 06 September 2025 00:38:29 +0000 (0:00:00.412) 0:00:17.643 **** 2025-09-06 00:38:34.269240 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba) 2025-09-06 00:38:34.269251 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba) 2025-09-06 00:38:34.269262 | orchestrator | 2025-09-06 00:38:34.269273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.269284 | orchestrator | Saturday 06 September 2025 00:38:30 +0000 (0:00:00.420) 0:00:18.063 **** 2025-09-06 00:38:34.269310 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7) 2025-09-06 00:38:34.269322 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7) 2025-09-06 00:38:34.269333 | orchestrator | 2025-09-06 00:38:34.269344 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:34.269355 | orchestrator | Saturday 06 September 2025 00:38:30 +0000 (0:00:00.413) 0:00:18.477 **** 2025-09-06 00:38:34.269366 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-06 00:38:34.269377 | orchestrator | 2025-09-06 00:38:34.269388 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:34.269405 | orchestrator | Saturday 06 September 2025 00:38:31 +0000 (0:00:00.319) 0:00:18.797 **** 2025-09-06 00:38:34.269416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-06 00:38:34.269436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-06 00:38:34.269447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-06 00:38:34.269458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-06 00:38:34.269468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-06 00:38:34.269479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-06 00:38:34.269490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-06 00:38:34.269501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-06 00:38:34.269511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-06 00:38:34.269522 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-06 00:38:34.269533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-06 00:38:34.269543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-06 00:38:34.269554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-06 00:38:34.269565 | orchestrator | 2025-09-06 00:38:34.269575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:34.269586 | orchestrator | Saturday 06 September 2025 00:38:31 +0000 (0:00:00.349) 0:00:19.146 **** 2025-09-06 00:38:34.269597 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.269607 | orchestrator | 2025-09-06 00:38:34.269618 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:34.269629 | orchestrator | Saturday 06 September 2025 00:38:31 +0000 (0:00:00.203) 0:00:19.350 **** 2025-09-06 00:38:34.269640 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.269650 | orchestrator | 2025-09-06 00:38:34.269661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:34.269671 | orchestrator | Saturday 06 September 2025 00:38:32 +0000 (0:00:00.599) 0:00:19.950 **** 2025-09-06 00:38:34.269682 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.269693 | orchestrator | 2025-09-06 00:38:34.269704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:34.269715 | orchestrator | Saturday 06 September 2025 00:38:32 +0000 (0:00:00.205) 0:00:20.155 **** 2025-09-06 00:38:34.269725 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.269736 | orchestrator | 2025-09-06 00:38:34.269747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:34.269758 | orchestrator | Saturday 06 September 2025 00:38:32 +0000 (0:00:00.205) 0:00:20.360 **** 2025-09-06 00:38:34.269769 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.269779 | orchestrator | 2025-09-06 00:38:34.269790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:34.269801 | orchestrator | Saturday 06 September 2025 00:38:32 +0000 (0:00:00.191) 0:00:20.551 **** 2025-09-06 00:38:34.269812 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.269823 | orchestrator | 2025-09-06 00:38:34.269833 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:34.269844 | orchestrator | Saturday 06 September 2025 00:38:32 +0000 (0:00:00.194) 0:00:20.745 **** 2025-09-06 00:38:34.269855 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.269865 | orchestrator | 2025-09-06 00:38:34.269876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:34.269887 | orchestrator | Saturday 06 September 2025 00:38:33 +0000 (0:00:00.208) 0:00:20.954 **** 2025-09-06 00:38:34.269898 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.269908 | orchestrator | 2025-09-06 00:38:34.269919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:34.269936 | orchestrator | Saturday 06 September 
2025 00:38:33 +0000 (0:00:00.190) 0:00:21.144 **** 2025-09-06 00:38:34.269947 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-06 00:38:34.269959 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-06 00:38:34.269970 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-06 00:38:34.269980 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-06 00:38:34.269991 | orchestrator | 2025-09-06 00:38:34.270002 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:34.270012 | orchestrator | Saturday 06 September 2025 00:38:34 +0000 (0:00:00.696) 0:00:21.841 **** 2025-09-06 00:38:34.270096 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:34.270107 | orchestrator | 2025-09-06 00:38:34.270126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:39.728812 | orchestrator | Saturday 06 September 2025 00:38:34 +0000 (0:00:00.198) 0:00:22.040 **** 2025-09-06 00:38:39.728913 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.728931 | orchestrator | 2025-09-06 00:38:39.728944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:39.728955 | orchestrator | Saturday 06 September 2025 00:38:34 +0000 (0:00:00.194) 0:00:22.234 **** 2025-09-06 00:38:39.728966 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.728977 | orchestrator | 2025-09-06 00:38:39.728988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:39.728999 | orchestrator | Saturday 06 September 2025 00:38:34 +0000 (0:00:00.265) 0:00:22.499 **** 2025-09-06 00:38:39.729010 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.729020 | orchestrator | 2025-09-06 00:38:39.729088 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-06 00:38:39.729102 | orchestrator | Saturday 06 September 2025 00:38:34 +0000 (0:00:00.195) 0:00:22.695 **** 2025-09-06 00:38:39.729113 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-06 00:38:39.729124 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-06 00:38:39.729135 | orchestrator | 2025-09-06 00:38:39.729146 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-06 00:38:39.729157 | orchestrator | Saturday 06 September 2025 00:38:35 +0000 (0:00:00.349) 0:00:23.044 **** 2025-09-06 00:38:39.729168 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.729179 | orchestrator | 2025-09-06 00:38:39.729190 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-06 00:38:39.729201 | orchestrator | Saturday 06 September 2025 00:38:35 +0000 (0:00:00.142) 0:00:23.186 **** 2025-09-06 00:38:39.729213 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.729223 | orchestrator | 2025-09-06 00:38:39.729235 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-06 00:38:39.729245 | orchestrator | Saturday 06 September 2025 00:38:35 +0000 (0:00:00.136) 0:00:23.323 **** 2025-09-06 00:38:39.729256 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.729267 | orchestrator | 2025-09-06 00:38:39.729278 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-06 
00:38:39.729289 | orchestrator | Saturday 06 September 2025 00:38:35 +0000 (0:00:00.137) 0:00:23.460 **** 2025-09-06 00:38:39.729300 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:38:39.729311 | orchestrator | 2025-09-06 00:38:39.729322 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-06 00:38:39.729333 | orchestrator | Saturday 06 September 2025 00:38:35 +0000 (0:00:00.159) 0:00:23.620 **** 2025-09-06 00:38:39.729344 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9969153-fa79-5368-8c16-a33775dfe5f6'}}) 2025-09-06 00:38:39.729356 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '473d4611-c66c-5516-9b6d-fd0b18ba2fe0'}}) 2025-09-06 00:38:39.729369 | orchestrator | 2025-09-06 00:38:39.729383 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-06 00:38:39.729418 | orchestrator | Saturday 06 September 2025 00:38:35 +0000 (0:00:00.149) 0:00:23.769 **** 2025-09-06 00:38:39.729433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9969153-fa79-5368-8c16-a33775dfe5f6'}})  2025-09-06 00:38:39.729446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '473d4611-c66c-5516-9b6d-fd0b18ba2fe0'}})  2025-09-06 00:38:39.729459 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.729472 | orchestrator | 2025-09-06 00:38:39.729485 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-06 00:38:39.729498 | orchestrator | Saturday 06 September 2025 00:38:36 +0000 (0:00:00.131) 0:00:23.901 **** 2025-09-06 00:38:39.729510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9969153-fa79-5368-8c16-a33775dfe5f6'}})  2025-09-06 00:38:39.729523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '473d4611-c66c-5516-9b6d-fd0b18ba2fe0'}})  2025-09-06 00:38:39.729536 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.729548 | orchestrator | 2025-09-06 00:38:39.729561 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-06 00:38:39.729574 | orchestrator | Saturday 06 September 2025 00:38:36 +0000 (0:00:00.156) 0:00:24.057 **** 2025-09-06 00:38:39.729586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9969153-fa79-5368-8c16-a33775dfe5f6'}})  2025-09-06 00:38:39.729598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '473d4611-c66c-5516-9b6d-fd0b18ba2fe0'}})  2025-09-06 00:38:39.729611 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.729624 | orchestrator | 2025-09-06 00:38:39.729636 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-06 00:38:39.729649 | orchestrator | Saturday 06 September 2025 00:38:36 +0000 (0:00:00.145) 0:00:24.203 **** 2025-09-06 00:38:39.729661 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:38:39.729674 | orchestrator | 2025-09-06 00:38:39.729686 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-06 00:38:39.729699 | orchestrator | Saturday 06 September 2025 00:38:36 +0000 (0:00:00.135) 0:00:24.339 **** 2025-09-06 00:38:39.729711 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:38:39.729722 
| orchestrator | 2025-09-06 00:38:39.729733 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-06 00:38:39.729744 | orchestrator | Saturday 06 September 2025 00:38:36 +0000 (0:00:00.140) 0:00:24.479 **** 2025-09-06 00:38:39.729755 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.729766 | orchestrator | 2025-09-06 00:38:39.729793 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-06 00:38:39.729805 | orchestrator | Saturday 06 September 2025 00:38:36 +0000 (0:00:00.124) 0:00:24.603 **** 2025-09-06 00:38:39.729816 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.729827 | orchestrator | 2025-09-06 00:38:39.729837 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-06 00:38:39.729848 | orchestrator | Saturday 06 September 2025 00:38:37 +0000 (0:00:00.238) 0:00:24.842 **** 2025-09-06 00:38:39.729859 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.729870 | orchestrator | 2025-09-06 00:38:39.729881 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-06 00:38:39.729891 | orchestrator | Saturday 06 September 2025 00:38:37 +0000 (0:00:00.108) 0:00:24.951 **** 2025-09-06 00:38:39.729902 | orchestrator | ok: [testbed-node-4] => { 2025-09-06 00:38:39.729913 | orchestrator |  "ceph_osd_devices": { 2025-09-06 00:38:39.729924 | orchestrator |  "sdb": { 2025-09-06 00:38:39.729937 | orchestrator |  "osd_lvm_uuid": "e9969153-fa79-5368-8c16-a33775dfe5f6" 2025-09-06 00:38:39.729948 | orchestrator |  }, 2025-09-06 00:38:39.729959 | orchestrator |  "sdc": { 2025-09-06 00:38:39.729979 | orchestrator |  "osd_lvm_uuid": "473d4611-c66c-5516-9b6d-fd0b18ba2fe0" 2025-09-06 00:38:39.729990 | orchestrator |  } 2025-09-06 00:38:39.730001 | orchestrator |  } 2025-09-06 00:38:39.730012 | orchestrator | } 2025-09-06 00:38:39.730110 | orchestrator | 2025-09-06 00:38:39.730122 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-06 00:38:39.730134 | orchestrator | Saturday 06 September 2025 00:38:37 +0000 (0:00:00.111) 0:00:25.062 **** 2025-09-06 00:38:39.730145 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.730156 | orchestrator | 2025-09-06 00:38:39.730174 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-06 00:38:39.730186 | orchestrator | Saturday 06 September 2025 00:38:37 +0000 (0:00:00.105) 0:00:25.168 **** 2025-09-06 00:38:39.730196 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.730207 | orchestrator | 2025-09-06 00:38:39.730218 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-06 00:38:39.730229 | orchestrator | Saturday 06 September 2025 00:38:37 +0000 (0:00:00.107) 0:00:25.276 **** 2025-09-06 00:38:39.730240 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:38:39.730250 | orchestrator | 2025-09-06 00:38:39.730261 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-06 00:38:39.730272 | orchestrator | Saturday 06 September 2025 00:38:37 +0000 (0:00:00.102) 0:00:25.378 **** 2025-09-06 00:38:39.730283 | orchestrator | changed: [testbed-node-4] => { 2025-09-06 00:38:39.730294 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-06 00:38:39.730305 | orchestrator |  "ceph_osd_devices": { 2025-09-06 
00:38:39.730316 | orchestrator |  "sdb": { 2025-09-06 00:38:39.730327 | orchestrator |  "osd_lvm_uuid": "e9969153-fa79-5368-8c16-a33775dfe5f6" 2025-09-06 00:38:39.730343 | orchestrator |  }, 2025-09-06 00:38:39.730355 | orchestrator |  "sdc": { 2025-09-06 00:38:39.730366 | orchestrator |  "osd_lvm_uuid": "473d4611-c66c-5516-9b6d-fd0b18ba2fe0" 2025-09-06 00:38:39.730377 | orchestrator |  } 2025-09-06 00:38:39.730388 | orchestrator |  }, 2025-09-06 00:38:39.730398 | orchestrator |  "lvm_volumes": [ 2025-09-06 00:38:39.730409 | orchestrator |  { 2025-09-06 00:38:39.730420 | orchestrator |  "data": "osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6", 2025-09-06 00:38:39.730431 | orchestrator |  "data_vg": "ceph-e9969153-fa79-5368-8c16-a33775dfe5f6" 2025-09-06 00:38:39.730442 | orchestrator |  }, 2025-09-06 00:38:39.730453 | orchestrator |  { 2025-09-06 00:38:39.730464 | orchestrator |  "data": "osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0", 2025-09-06 00:38:39.730475 | orchestrator |  "data_vg": "ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0" 2025-09-06 00:38:39.730486 | orchestrator |  } 2025-09-06 00:38:39.730496 | orchestrator |  ] 2025-09-06 00:38:39.730507 | orchestrator |  } 2025-09-06 00:38:39.730518 | orchestrator | } 2025-09-06 00:38:39.730529 | orchestrator | 2025-09-06 00:38:39.730540 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-06 00:38:39.730551 | orchestrator | Saturday 06 September 2025 00:38:37 +0000 (0:00:00.168) 0:00:25.546 **** 2025-09-06 00:38:39.730562 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-06 00:38:39.730573 | orchestrator | 2025-09-06 00:38:39.730584 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-06 00:38:39.730595 | orchestrator | 2025-09-06 00:38:39.730606 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-06 00:38:39.730617 | orchestrator | Saturday 06 September 2025 00:38:38 +0000 (0:00:00.878) 0:00:26.425 **** 2025-09-06 00:38:39.730627 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-06 00:38:39.730638 | orchestrator | 2025-09-06 00:38:39.730649 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-06 00:38:39.730660 | orchestrator | Saturday 06 September 2025 00:38:38 +0000 (0:00:00.340) 0:00:26.765 **** 2025-09-06 00:38:39.730679 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:38:39.730690 | orchestrator | 2025-09-06 00:38:39.730701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:39.730712 | orchestrator | Saturday 06 September 2025 00:38:39 +0000 (0:00:00.445) 0:00:27.210 **** 2025-09-06 00:38:39.730723 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-06 00:38:39.730734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-06 00:38:39.730745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-06 00:38:39.730756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-06 00:38:39.730766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-06 00:38:39.730777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-09-06 00:38:39.730796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-06 00:38:46.475862 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-06 00:38:46.475961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-06 00:38:46.475997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-06 00:38:46.476009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-06 00:38:46.476020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-06 00:38:46.476030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-06 00:38:46.476081 | orchestrator | 2025-09-06 00:38:46.476093 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476105 | orchestrator | Saturday 06 September 2025 00:38:39 +0000 (0:00:00.288) 0:00:27.499 **** 2025-09-06 00:38:46.476116 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.476128 | orchestrator | 2025-09-06 00:38:46.476139 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476149 | orchestrator | Saturday 06 September 2025 00:38:39 +0000 (0:00:00.153) 0:00:27.653 **** 2025-09-06 00:38:46.476160 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.476171 | orchestrator | 2025-09-06 00:38:46.476192 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476204 | orchestrator | Saturday 06 September 2025 00:38:40 +0000 (0:00:00.165) 0:00:27.819 **** 2025-09-06 00:38:46.476214 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.476251 | orchestrator | 2025-09-06 00:38:46.476263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476274 | orchestrator | Saturday 06 September 2025 00:38:40 +0000 (0:00:00.190) 0:00:28.010 **** 2025-09-06 00:38:46.476295 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.476306 | orchestrator | 2025-09-06 00:38:46.476317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476351 | orchestrator | Saturday 06 September 2025 00:38:40 +0000 (0:00:00.169) 0:00:28.180 **** 2025-09-06 00:38:46.476363 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.476373 | orchestrator | 2025-09-06 00:38:46.476396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476418 | orchestrator | Saturday 06 September 2025 00:38:40 +0000 (0:00:00.150) 0:00:28.330 **** 2025-09-06 00:38:46.476444 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.476457 | orchestrator | 2025-09-06 00:38:46.476470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476483 | orchestrator | Saturday 06 September 2025 00:38:40 +0000 (0:00:00.145) 0:00:28.475 **** 2025-09-06 00:38:46.476495 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.476534 | orchestrator | 2025-09-06 00:38:46.476546 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-06 00:38:46.476559 | orchestrator | Saturday 06 September 2025 00:38:40 +0000 (0:00:00.130) 0:00:28.606 **** 2025-09-06 00:38:46.476571 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.476583 | orchestrator | 2025-09-06 00:38:46.476612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476626 | orchestrator | Saturday 06 September 2025 00:38:40 +0000 (0:00:00.142) 0:00:28.748 **** 2025-09-06 00:38:46.476639 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626) 2025-09-06 00:38:46.476653 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626) 2025-09-06 00:38:46.476665 | orchestrator | 2025-09-06 00:38:46.476677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476691 | orchestrator | Saturday 06 September 2025 00:38:41 +0000 (0:00:00.463) 0:00:29.211 **** 2025-09-06 00:38:46.476703 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b) 2025-09-06 00:38:46.476715 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b) 2025-09-06 00:38:46.476728 | orchestrator | 2025-09-06 00:38:46.476741 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476753 | orchestrator | Saturday 06 September 2025 00:38:42 +0000 (0:00:00.646) 0:00:29.858 **** 2025-09-06 00:38:46.476766 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4) 2025-09-06 00:38:46.476777 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4) 2025-09-06 00:38:46.476788 | orchestrator | 2025-09-06 00:38:46.476799 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476809 | orchestrator | Saturday 06 September 2025 00:38:42 +0000 (0:00:00.382) 0:00:30.241 **** 2025-09-06 00:38:46.476820 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634) 2025-09-06 00:38:46.476831 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634) 2025-09-06 00:38:46.476841 | orchestrator | 2025-09-06 00:38:46.476852 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:38:46.476863 | orchestrator | Saturday 06 September 2025 00:38:42 +0000 (0:00:00.380) 0:00:30.621 **** 2025-09-06 00:38:46.476873 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-06 00:38:46.476884 | orchestrator | 2025-09-06 00:38:46.476895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.476906 | orchestrator | Saturday 06 September 2025 00:38:43 +0000 (0:00:00.301) 0:00:30.922 **** 2025-09-06 00:38:46.476954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-06 00:38:46.476983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-06 00:38:46.477006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-06 00:38:46.477017 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-06 00:38:46.477057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-06 00:38:46.477069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-06 00:38:46.477080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-06 00:38:46.477090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-06 00:38:46.477101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-06 00:38:46.477147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-06 00:38:46.477159 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-06 00:38:46.477169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-06 00:38:46.477180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-06 00:38:46.477190 | orchestrator | 2025-09-06 00:38:46.477201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477223 | orchestrator | Saturday 06 September 2025 00:38:43 +0000 (0:00:00.328) 0:00:31.251 **** 2025-09-06 00:38:46.477234 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477244 | orchestrator | 2025-09-06 00:38:46.477255 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477266 | orchestrator | Saturday 06 September 2025 00:38:43 +0000 (0:00:00.184) 0:00:31.436 **** 2025-09-06 00:38:46.477276 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477287 | orchestrator | 2025-09-06 00:38:46.477298 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477308 | orchestrator | Saturday 06 September 2025 00:38:43 +0000 (0:00:00.181) 0:00:31.617 **** 2025-09-06 00:38:46.477319 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477329 | orchestrator | 2025-09-06 00:38:46.477340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477351 | orchestrator | Saturday 06 September 2025 00:38:44 +0000 (0:00:00.168) 0:00:31.786 **** 2025-09-06 00:38:46.477361 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477372 | orchestrator | 2025-09-06 00:38:46.477382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477393 | orchestrator | Saturday 06 September 2025 00:38:44 +0000 (0:00:00.185) 0:00:31.972 **** 2025-09-06 00:38:46.477403 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477414 | orchestrator | 2025-09-06 00:38:46.477435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477459 | orchestrator | Saturday 06 September 2025 00:38:44 +0000 (0:00:00.182) 0:00:32.154 **** 2025-09-06 00:38:46.477470 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477481 | orchestrator | 2025-09-06 00:38:46.477502 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-09-06 00:38:46.477514 | orchestrator | Saturday 06 September 2025 00:38:44 +0000 (0:00:00.435) 0:00:32.589 **** 2025-09-06 00:38:46.477524 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477535 | orchestrator | 2025-09-06 00:38:46.477545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477556 | orchestrator | Saturday 06 September 2025 00:38:44 +0000 (0:00:00.172) 0:00:32.762 **** 2025-09-06 00:38:46.477567 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477595 | orchestrator | 2025-09-06 00:38:46.477606 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477617 | orchestrator | Saturday 06 September 2025 00:38:45 +0000 (0:00:00.206) 0:00:32.968 **** 2025-09-06 00:38:46.477627 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-06 00:38:46.477638 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-06 00:38:46.477649 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-06 00:38:46.477659 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-06 00:38:46.477669 | orchestrator | 2025-09-06 00:38:46.477680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477691 | orchestrator | Saturday 06 September 2025 00:38:45 +0000 (0:00:00.591) 0:00:33.560 **** 2025-09-06 00:38:46.477702 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477712 | orchestrator | 2025-09-06 00:38:46.477723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477741 | orchestrator | Saturday 06 September 2025 00:38:45 +0000 (0:00:00.185) 0:00:33.746 **** 2025-09-06 00:38:46.477751 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477762 | orchestrator | 2025-09-06 00:38:46.477772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477783 | orchestrator | Saturday 06 September 2025 00:38:46 +0000 (0:00:00.177) 0:00:33.924 **** 2025-09-06 00:38:46.477794 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477804 | orchestrator | 2025-09-06 00:38:46.477815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:38:46.477825 | orchestrator | Saturday 06 September 2025 00:38:46 +0000 (0:00:00.164) 0:00:34.088 **** 2025-09-06 00:38:46.477842 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:46.477853 | orchestrator | 2025-09-06 00:38:46.477864 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-06 00:38:46.477881 | orchestrator | Saturday 06 September 2025 00:38:46 +0000 (0:00:00.160) 0:00:34.248 **** 2025-09-06 00:38:50.417870 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-09-06 00:38:50.417979 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-06 00:38:50.417995 | orchestrator | 2025-09-06 00:38:50.418008 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-06 00:38:50.418102 | orchestrator | Saturday 06 September 2025 00:38:46 +0000 (0:00:00.160) 0:00:34.409 **** 2025-09-06 00:38:50.418115 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.418127 | orchestrator | 2025-09-06 00:38:50.418138 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-09-06 00:38:50.418149 | orchestrator | Saturday 06 September 2025 00:38:46 +0000 (0:00:00.117) 0:00:34.526 **** 2025-09-06 00:38:50.418160 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.418171 | orchestrator | 2025-09-06 00:38:50.418182 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-06 00:38:50.418193 | orchestrator | Saturday 06 September 2025 00:38:46 +0000 (0:00:00.103) 0:00:34.630 **** 2025-09-06 00:38:50.418204 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.418214 | orchestrator | 2025-09-06 00:38:50.418225 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-06 00:38:50.418236 | orchestrator | Saturday 06 September 2025 00:38:46 +0000 (0:00:00.106) 0:00:34.737 **** 2025-09-06 00:38:50.418247 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:38:50.418258 | orchestrator | 2025-09-06 00:38:50.418270 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-06 00:38:50.418280 | orchestrator | Saturday 06 September 2025 00:38:47 +0000 (0:00:00.292) 0:00:35.029 **** 2025-09-06 00:38:50.418293 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'}}) 2025-09-06 00:38:50.418305 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd801673f-a74f-56ad-ad0d-e97588ff4709'}}) 2025-09-06 00:38:50.418315 | orchestrator | 2025-09-06 00:38:50.418326 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-06 00:38:50.418337 | orchestrator | Saturday 06 September 2025 00:38:47 +0000 (0:00:00.141) 0:00:35.171 **** 2025-09-06 00:38:50.418348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'}})  2025-09-06 00:38:50.418361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd801673f-a74f-56ad-ad0d-e97588ff4709'}})  2025-09-06 00:38:50.418372 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.418382 | orchestrator | 2025-09-06 00:38:50.418412 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-06 00:38:50.418423 | orchestrator | Saturday 06 September 2025 00:38:47 +0000 (0:00:00.122) 0:00:35.293 **** 2025-09-06 00:38:50.418434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'}})  2025-09-06 00:38:50.418479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd801673f-a74f-56ad-ad0d-e97588ff4709'}})  2025-09-06 00:38:50.418490 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.418501 | orchestrator | 2025-09-06 00:38:50.418512 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-06 00:38:50.418522 | orchestrator | Saturday 06 September 2025 00:38:47 +0000 (0:00:00.139) 0:00:35.433 **** 2025-09-06 00:38:50.418533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'}})  2025-09-06 00:38:50.418544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd801673f-a74f-56ad-ad0d-e97588ff4709'}})  2025-09-06 
00:38:50.418554 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.418565 | orchestrator | 2025-09-06 00:38:50.418576 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-06 00:38:50.418586 | orchestrator | Saturday 06 September 2025 00:38:47 +0000 (0:00:00.157) 0:00:35.591 **** 2025-09-06 00:38:50.418597 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:38:50.418607 | orchestrator | 2025-09-06 00:38:50.418618 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-06 00:38:50.418628 | orchestrator | Saturday 06 September 2025 00:38:47 +0000 (0:00:00.132) 0:00:35.723 **** 2025-09-06 00:38:50.418639 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:38:50.418650 | orchestrator | 2025-09-06 00:38:50.418660 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-06 00:38:50.418671 | orchestrator | Saturday 06 September 2025 00:38:48 +0000 (0:00:00.116) 0:00:35.840 **** 2025-09-06 00:38:50.418681 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.418692 | orchestrator | 2025-09-06 00:38:50.418702 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-06 00:38:50.418713 | orchestrator | Saturday 06 September 2025 00:38:48 +0000 (0:00:00.134) 0:00:35.974 **** 2025-09-06 00:38:50.418723 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.418734 | orchestrator | 2025-09-06 00:38:50.418745 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-06 00:38:50.418755 | orchestrator | Saturday 06 September 2025 00:38:48 +0000 (0:00:00.140) 0:00:36.115 **** 2025-09-06 00:38:50.418766 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.418776 | orchestrator | 2025-09-06 00:38:50.418787 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-06 00:38:50.418797 | orchestrator | Saturday 06 September 2025 00:38:48 +0000 (0:00:00.125) 0:00:36.240 **** 2025-09-06 00:38:50.418808 | orchestrator | ok: [testbed-node-5] => { 2025-09-06 00:38:50.418819 | orchestrator |  "ceph_osd_devices": { 2025-09-06 00:38:50.418829 | orchestrator |  "sdb": { 2025-09-06 00:38:50.418841 | orchestrator |  "osd_lvm_uuid": "6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f" 2025-09-06 00:38:50.418870 | orchestrator |  }, 2025-09-06 00:38:50.418882 | orchestrator |  "sdc": { 2025-09-06 00:38:50.418893 | orchestrator |  "osd_lvm_uuid": "d801673f-a74f-56ad-ad0d-e97588ff4709" 2025-09-06 00:38:50.418904 | orchestrator |  } 2025-09-06 00:38:50.418915 | orchestrator |  } 2025-09-06 00:38:50.418926 | orchestrator | } 2025-09-06 00:38:50.418937 | orchestrator | 2025-09-06 00:38:50.418948 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-06 00:38:50.418959 | orchestrator | Saturday 06 September 2025 00:38:48 +0000 (0:00:00.162) 0:00:36.402 **** 2025-09-06 00:38:50.418970 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.418980 | orchestrator | 2025-09-06 00:38:50.418991 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-06 00:38:50.419001 | orchestrator | Saturday 06 September 2025 00:38:48 +0000 (0:00:00.148) 0:00:36.551 **** 2025-09-06 00:38:50.419012 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.419022 | orchestrator | 2025-09-06 00:38:50.419051 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-09-06 00:38:50.419072 | orchestrator | Saturday 06 September 2025 00:38:49 +0000 (0:00:00.323) 0:00:36.874 **** 2025-09-06 00:38:50.419083 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:38:50.419094 | orchestrator | 2025-09-06 00:38:50.419105 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-06 00:38:50.419115 | orchestrator | Saturday 06 September 2025 00:38:49 +0000 (0:00:00.152) 0:00:37.027 **** 2025-09-06 00:38:50.419126 | orchestrator | changed: [testbed-node-5] => { 2025-09-06 00:38:50.419137 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-06 00:38:50.419147 | orchestrator |  "ceph_osd_devices": { 2025-09-06 00:38:50.419158 | orchestrator |  "sdb": { 2025-09-06 00:38:50.419169 | orchestrator |  "osd_lvm_uuid": "6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f" 2025-09-06 00:38:50.419180 | orchestrator |  }, 2025-09-06 00:38:50.419191 | orchestrator |  "sdc": { 2025-09-06 00:38:50.419201 | orchestrator |  "osd_lvm_uuid": "d801673f-a74f-56ad-ad0d-e97588ff4709" 2025-09-06 00:38:50.419212 | orchestrator |  } 2025-09-06 00:38:50.419223 | orchestrator |  }, 2025-09-06 00:38:50.419234 | orchestrator |  "lvm_volumes": [ 2025-09-06 00:38:50.419245 | orchestrator |  { 2025-09-06 00:38:50.419255 | orchestrator |  "data": "osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f", 2025-09-06 00:38:50.419266 | orchestrator |  "data_vg": "ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f" 2025-09-06 00:38:50.419277 | orchestrator |  }, 2025-09-06 00:38:50.419288 | orchestrator |  { 2025-09-06 00:38:50.419298 | orchestrator |  "data": "osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709", 2025-09-06 00:38:50.419309 | orchestrator |  "data_vg": "ceph-d801673f-a74f-56ad-ad0d-e97588ff4709" 2025-09-06 00:38:50.419321 | orchestrator |  } 2025-09-06 00:38:50.419331 | orchestrator |  ] 2025-09-06 00:38:50.419342 | orchestrator |  } 2025-09-06 00:38:50.419357 | orchestrator | } 2025-09-06 00:38:50.419368 | orchestrator | 2025-09-06 00:38:50.419379 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-06 00:38:50.419390 | orchestrator | Saturday 06 September 2025 00:38:49 +0000 (0:00:00.194) 0:00:37.222 **** 2025-09-06 00:38:50.419401 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-06 00:38:50.419411 | orchestrator | 2025-09-06 00:38:50.419422 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:38:50.419442 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-06 00:38:50.419454 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-06 00:38:50.419465 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-06 00:38:50.419476 | orchestrator | 2025-09-06 00:38:50.419487 | orchestrator | 2025-09-06 00:38:50.419498 | orchestrator | 2025-09-06 00:38:50.419508 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:38:50.419519 | orchestrator | Saturday 06 September 2025 00:38:50 +0000 (0:00:00.945) 0:00:38.167 **** 2025-09-06 00:38:50.419530 | orchestrator | =============================================================================== 2025-09-06 00:38:50.419541 | orchestrator | Write configuration file 
------------------------------------------------ 3.88s 2025-09-06 00:38:50.419551 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s 2025-09-06 00:38:50.419562 | orchestrator | Add known links to the list of available block devices ------------------ 0.98s 2025-09-06 00:38:50.419573 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2025-09-06 00:38:50.419584 | orchestrator | Get initial list of available block devices ----------------------------- 0.89s 2025-09-06 00:38:50.419600 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.84s 2025-09-06 00:38:50.419611 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2025-09-06 00:38:50.419622 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.68s 2025-09-06 00:38:50.419633 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-09-06 00:38:50.419644 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-09-06 00:38:50.419654 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.63s 2025-09-06 00:38:50.419665 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-09-06 00:38:50.419676 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2025-09-06 00:38:50.419686 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2025-09-06 00:38:50.419705 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.59s 2025-09-06 00:38:50.784836 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2025-09-06 00:38:50.784939 | orchestrator | Print configuration data ------------------------------------------------ 0.57s 2025-09-06 00:38:50.784953 | orchestrator | Print DB devices -------------------------------------------------------- 0.56s 2025-09-06 00:38:50.784965 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s 2025-09-06 00:38:50.784977 | orchestrator | Set WAL devices config data --------------------------------------------- 0.51s 2025-09-06 00:39:13.214521 | orchestrator | 2025-09-06 00:39:13 | INFO  | Task b5da78d9-6016-4854-acc2-0ca3fc5cb997 (sync inventory) is running in background. Output coming soon. 
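Aside (editor's sketch, not part of the console output): the three "Print configuration data" tasks above show the same pattern on every storage node. Each entry in ceph_osd_devices carries an osd_lvm_uuid, and the compiled lvm_volumes list simply prefixes that UUID with osd-block- (logical volume name) and ceph- (volume group name). A minimal Python sketch of that mapping, using the testbed-node-3 values from the log; the standalone script and the helper name compile_lvm_volumes are illustrative assumptions, not the actual OSISM task implementation.

    # Sketch: rebuild the lvm_volumes structure printed by "Print configuration data".
    # The input dict matches the ceph_osd_devices values logged for testbed-node-3 above;
    # compile_lvm_volumes is a hypothetical helper used only for illustration.
    import json

    ceph_osd_devices = {
        "sdb": {"osd_lvm_uuid": "6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567"},
        "sdc": {"osd_lvm_uuid": "e6b4ea58-4fde-56e5-979f-346e927a82c3"},
    }

    def compile_lvm_volumes(devices):
        """Map each OSD device UUID to its LV ('data') and VG ('data_vg') names."""
        return [
            {
                "data": "osd-block-" + cfg["osd_lvm_uuid"],
                "data_vg": "ceph-" + cfg["osd_lvm_uuid"],
            }
            for cfg in devices.values()
        ]

    if __name__ == "__main__":
        print(json.dumps(
            {"ceph_osd_devices": ceph_osd_devices,
             "lvm_volumes": compile_lvm_volumes(ceph_osd_devices)},
            indent=2,
        ))

Running the sketch reproduces the JSON block emitted per node above, which is what the "Write configuration file" handler then persists on testbed-manager.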
2025-09-06 00:39:36.110122 | orchestrator | 2025-09-06 00:39:14 | INFO  | Starting group_vars file reorganization 2025-09-06 00:39:36.110193 | orchestrator | 2025-09-06 00:39:14 | INFO  | Moved 0 file(s) to their respective directories 2025-09-06 00:39:36.110203 | orchestrator | 2025-09-06 00:39:14 | INFO  | Group_vars file reorganization completed 2025-09-06 00:39:36.110210 | orchestrator | 2025-09-06 00:39:16 | INFO  | Starting variable preparation from inventory 2025-09-06 00:39:36.110218 | orchestrator | 2025-09-06 00:39:19 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-06 00:39:36.110225 | orchestrator | 2025-09-06 00:39:19 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-06 00:39:36.110232 | orchestrator | 2025-09-06 00:39:19 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-06 00:39:36.110239 | orchestrator | 2025-09-06 00:39:19 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-06 00:39:36.110247 | orchestrator | 2025-09-06 00:39:19 | INFO  | Variable preparation completed 2025-09-06 00:39:36.110254 | orchestrator | 2025-09-06 00:39:20 | INFO  | Starting inventory overwrite handling 2025-09-06 00:39:36.110261 | orchestrator | 2025-09-06 00:39:20 | INFO  | Handling group overwrites in 99-overwrite 2025-09-06 00:39:36.110269 | orchestrator | 2025-09-06 00:39:20 | INFO  | Removing group frr:children from 60-generic 2025-09-06 00:39:36.110276 | orchestrator | 2025-09-06 00:39:20 | INFO  | Removing group storage:children from 50-kolla 2025-09-06 00:39:36.110283 | orchestrator | 2025-09-06 00:39:20 | INFO  | Removing group netbird:children from 50-infrastruture 2025-09-06 00:39:36.110290 | orchestrator | 2025-09-06 00:39:20 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-06 00:39:36.110298 | orchestrator | 2025-09-06 00:39:20 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-06 00:39:36.110305 | orchestrator | 2025-09-06 00:39:20 | INFO  | Handling group overwrites in 20-roles 2025-09-06 00:39:36.110312 | orchestrator | 2025-09-06 00:39:20 | INFO  | Removing group k3s_node from 50-infrastruture 2025-09-06 00:39:36.110336 | orchestrator | 2025-09-06 00:39:20 | INFO  | Removed 6 group(s) in total 2025-09-06 00:39:36.110343 | orchestrator | 2025-09-06 00:39:20 | INFO  | Inventory overwrite handling completed 2025-09-06 00:39:36.110350 | orchestrator | 2025-09-06 00:39:21 | INFO  | Starting merge of inventory files 2025-09-06 00:39:36.110357 | orchestrator | 2025-09-06 00:39:21 | INFO  | Inventory files merged successfully 2025-09-06 00:39:36.110364 | orchestrator | 2025-09-06 00:39:25 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-06 00:39:36.110371 | orchestrator | 2025-09-06 00:39:35 | INFO  | Successfully wrote ClusterShell configuration 2025-09-06 00:39:36.110379 | orchestrator | [master b0d07a7] 2025-09-06-00-39 2025-09-06 00:39:36.110386 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-06 00:39:37.668714 | orchestrator | 2025-09-06 00:39:37 | INFO  | Task f3e9f4f6-6f77-4b05-978e-834385156bcf (ceph-create-lvm-devices) was prepared for execution. 2025-09-06 00:39:37.668821 | orchestrator | 2025-09-06 00:39:37 | INFO  | It takes a moment until task f3e9f4f6-6f77-4b05-978e-834385156bcf (ceph-create-lvm-devices) has been started and output is visible here. 
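Aside (editor's sketch, not part of the console output): the task queued above, ceph-create-lvm-devices, consumes the configuration written in the previous play. For orientation only, a Python sketch of the LVM layout those entries imply: one ceph-<uuid> volume group per device, holding a single osd-block-<uuid> logical volume. The device-to-UUID pairing is taken from the testbed-node-4 values printed earlier; the pvcreate/vgcreate/lvcreate command strings are an assumption for illustration and are not the playbook's actual steps.

    # Sketch only: print the LVM commands implied by one node's configuration data.
    # UUIDs below are the testbed-node-4 values from the log; the command sequence
    # is illustrative, the real work is done by the ceph-create-lvm-devices play.
    ceph_osd_devices = {
        "/dev/sdb": "e9969153-fa79-5368-8c16-a33775dfe5f6",
        "/dev/sdc": "473d4611-c66c-5516-9b6d-fd0b18ba2fe0",
    }

    for device, uuid in ceph_osd_devices.items():
        vg = "ceph-" + uuid          # volume group name, matches 'data_vg' above
        lv = "osd-block-" + uuid     # logical volume name, matches 'data' above
        print("pvcreate " + device)
        print("vgcreate " + vg + " " + device)
        print("lvcreate -l 100%FREE -n " + lv + " " + vg)

The next play in the log iterates the same block-device discovery tasks per node before creating these volumes.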
2025-09-06 00:39:48.002319 | orchestrator | 2025-09-06 00:39:48.002414 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-06 00:39:48.002430 | orchestrator | 2025-09-06 00:39:48.002442 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-06 00:39:48.002454 | orchestrator | Saturday 06 September 2025 00:39:41 +0000 (0:00:00.252) 0:00:00.252 **** 2025-09-06 00:39:48.002482 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-06 00:39:48.002494 | orchestrator | 2025-09-06 00:39:48.002505 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-06 00:39:48.002516 | orchestrator | Saturday 06 September 2025 00:39:41 +0000 (0:00:00.213) 0:00:00.465 **** 2025-09-06 00:39:48.002527 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:39:48.002539 | orchestrator | 2025-09-06 00:39:48.002549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.002560 | orchestrator | Saturday 06 September 2025 00:39:41 +0000 (0:00:00.206) 0:00:00.672 **** 2025-09-06 00:39:48.002571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-06 00:39:48.002583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-06 00:39:48.002594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-06 00:39:48.002605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-06 00:39:48.002615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-06 00:39:48.002626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-06 00:39:48.002637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-06 00:39:48.002648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-06 00:39:48.002658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-06 00:39:48.002669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-06 00:39:48.002680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-06 00:39:48.002690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-06 00:39:48.002701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-06 00:39:48.002712 | orchestrator | 2025-09-06 00:39:48.002722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.002754 | orchestrator | Saturday 06 September 2025 00:39:42 +0000 (0:00:00.358) 0:00:01.030 **** 2025-09-06 00:39:48.002765 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.002776 | orchestrator | 2025-09-06 00:39:48.002787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.002810 | orchestrator | Saturday 06 September 2025 00:39:42 +0000 (0:00:00.309) 0:00:01.340 **** 2025-09-06 00:39:48.002822 | orchestrator | skipping: [testbed-node-3] 2025-09-06 
00:39:48.002832 | orchestrator | 2025-09-06 00:39:48.002843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.002854 | orchestrator | Saturday 06 September 2025 00:39:42 +0000 (0:00:00.164) 0:00:01.505 **** 2025-09-06 00:39:48.002871 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.002885 | orchestrator | 2025-09-06 00:39:48.002898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.002911 | orchestrator | Saturday 06 September 2025 00:39:42 +0000 (0:00:00.165) 0:00:01.670 **** 2025-09-06 00:39:48.002924 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.002937 | orchestrator | 2025-09-06 00:39:48.002950 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.003020 | orchestrator | Saturday 06 September 2025 00:39:42 +0000 (0:00:00.194) 0:00:01.865 **** 2025-09-06 00:39:48.003036 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.003049 | orchestrator | 2025-09-06 00:39:48.003063 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.003076 | orchestrator | Saturday 06 September 2025 00:39:43 +0000 (0:00:00.178) 0:00:02.043 **** 2025-09-06 00:39:48.003089 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.003102 | orchestrator | 2025-09-06 00:39:48.003115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.003127 | orchestrator | Saturday 06 September 2025 00:39:43 +0000 (0:00:00.175) 0:00:02.219 **** 2025-09-06 00:39:48.003140 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.003152 | orchestrator | 2025-09-06 00:39:48.003166 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.003179 | orchestrator | Saturday 06 September 2025 00:39:43 +0000 (0:00:00.180) 0:00:02.400 **** 2025-09-06 00:39:48.003191 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.003204 | orchestrator | 2025-09-06 00:39:48.003217 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.003230 | orchestrator | Saturday 06 September 2025 00:39:43 +0000 (0:00:00.163) 0:00:02.563 **** 2025-09-06 00:39:48.003241 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20) 2025-09-06 00:39:48.003252 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20) 2025-09-06 00:39:48.003263 | orchestrator | 2025-09-06 00:39:48.003274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.003285 | orchestrator | Saturday 06 September 2025 00:39:44 +0000 (0:00:00.365) 0:00:02.928 **** 2025-09-06 00:39:48.003312 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8) 2025-09-06 00:39:48.003324 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8) 2025-09-06 00:39:48.003334 | orchestrator | 2025-09-06 00:39:48.003345 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.003356 | orchestrator | Saturday 06 September 2025 00:39:44 +0000 (0:00:00.352) 0:00:03.281 **** 2025-09-06 
00:39:48.003366 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff) 2025-09-06 00:39:48.003377 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff) 2025-09-06 00:39:48.003388 | orchestrator | 2025-09-06 00:39:48.003398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.003418 | orchestrator | Saturday 06 September 2025 00:39:45 +0000 (0:00:00.617) 0:00:03.898 **** 2025-09-06 00:39:48.003428 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5) 2025-09-06 00:39:48.003439 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5) 2025-09-06 00:39:48.003450 | orchestrator | 2025-09-06 00:39:48.003461 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:39:48.003471 | orchestrator | Saturday 06 September 2025 00:39:45 +0000 (0:00:00.754) 0:00:04.652 **** 2025-09-06 00:39:48.003482 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-06 00:39:48.003493 | orchestrator | 2025-09-06 00:39:48.003503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:48.003514 | orchestrator | Saturday 06 September 2025 00:39:46 +0000 (0:00:00.304) 0:00:04.957 **** 2025-09-06 00:39:48.003524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-06 00:39:48.003535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-06 00:39:48.003545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-06 00:39:48.003556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-06 00:39:48.003567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-06 00:39:48.003577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-06 00:39:48.003588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-06 00:39:48.003598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-06 00:39:48.003609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-06 00:39:48.003619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-06 00:39:48.003630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-06 00:39:48.003640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-06 00:39:48.003651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-06 00:39:48.003661 | orchestrator | 2025-09-06 00:39:48.003672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:48.003683 | orchestrator | Saturday 06 September 2025 00:39:46 +0000 (0:00:00.405) 0:00:05.363 **** 2025-09-06 00:39:48.003693 | orchestrator | skipping: [testbed-node-3] 
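The repeated "Add known links to the list of available block devices" tasks above come from /ansible/tasks/_add-device-links.yml, which is included once per device reported by the node (loop0..loop7, sda..sdd, sr0) and records the stable /dev/disk/by-id aliases (scsi-0QEMU_QEMU_HARDDISK_..., ata-QEMU_DVD-ROM_QM00001) for each disk. The actual task file is not part of this log; as a hedged sketch, a similar device/alias list can be collected from Ansible facts like this (available_devices is a name chosen here purely for illustration):

# Illustrative only -- not the contents of /ansible/tasks/_add-device-links.yml
- name: Get initial list of available block devices
  ansible.builtin.set_fact:
    available_devices: "{{ ansible_facts['devices'].keys() | list }}"

- name: Add known links to the list of available block devices
  ansible.builtin.set_fact:
    available_devices: "{{ available_devices + ansible_facts['devices'][item]['links']['ids'] }}"
  loop: "{{ ansible_facts['devices'].keys() | list }}"
  when: ansible_facts['devices'][item]['links']['ids'] | length > 0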
2025-09-06 00:39:48.003704 | orchestrator | 2025-09-06 00:39:48.003715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:48.003726 | orchestrator | Saturday 06 September 2025 00:39:46 +0000 (0:00:00.189) 0:00:05.552 **** 2025-09-06 00:39:48.003736 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.003747 | orchestrator | 2025-09-06 00:39:48.003757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:48.003768 | orchestrator | Saturday 06 September 2025 00:39:46 +0000 (0:00:00.210) 0:00:05.762 **** 2025-09-06 00:39:48.003779 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.003789 | orchestrator | 2025-09-06 00:39:48.003800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:48.003810 | orchestrator | Saturday 06 September 2025 00:39:47 +0000 (0:00:00.187) 0:00:05.949 **** 2025-09-06 00:39:48.003821 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.003832 | orchestrator | 2025-09-06 00:39:48.003842 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:48.003859 | orchestrator | Saturday 06 September 2025 00:39:47 +0000 (0:00:00.192) 0:00:06.142 **** 2025-09-06 00:39:48.003869 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.003880 | orchestrator | 2025-09-06 00:39:48.003891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:48.003901 | orchestrator | Saturday 06 September 2025 00:39:47 +0000 (0:00:00.188) 0:00:06.331 **** 2025-09-06 00:39:48.003912 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.003922 | orchestrator | 2025-09-06 00:39:48.003933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:48.003944 | orchestrator | Saturday 06 September 2025 00:39:47 +0000 (0:00:00.191) 0:00:06.523 **** 2025-09-06 00:39:48.003954 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:48.003979 | orchestrator | 2025-09-06 00:39:48.003991 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:48.004002 | orchestrator | Saturday 06 September 2025 00:39:47 +0000 (0:00:00.184) 0:00:06.708 **** 2025-09-06 00:39:48.004019 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.716951 | orchestrator | 2025-09-06 00:39:55.717098 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:55.717115 | orchestrator | Saturday 06 September 2025 00:39:47 +0000 (0:00:00.183) 0:00:06.891 **** 2025-09-06 00:39:55.717128 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-06 00:39:55.717140 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-06 00:39:55.717152 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-06 00:39:55.717162 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-06 00:39:55.717173 | orchestrator | 2025-09-06 00:39:55.717184 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:55.717196 | orchestrator | Saturday 06 September 2025 00:39:48 +0000 (0:00:00.942) 0:00:07.834 **** 2025-09-06 00:39:55.717206 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.717217 | orchestrator | 2025-09-06 00:39:55.717228 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:55.717239 | orchestrator | Saturday 06 September 2025 00:39:49 +0000 (0:00:00.212) 0:00:08.047 **** 2025-09-06 00:39:55.717250 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.717261 | orchestrator | 2025-09-06 00:39:55.717272 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:55.717282 | orchestrator | Saturday 06 September 2025 00:39:49 +0000 (0:00:00.211) 0:00:08.259 **** 2025-09-06 00:39:55.717293 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.717304 | orchestrator | 2025-09-06 00:39:55.717315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:39:55.717326 | orchestrator | Saturday 06 September 2025 00:39:49 +0000 (0:00:00.205) 0:00:08.464 **** 2025-09-06 00:39:55.717337 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.717348 | orchestrator | 2025-09-06 00:39:55.717358 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-06 00:39:55.717369 | orchestrator | Saturday 06 September 2025 00:39:49 +0000 (0:00:00.184) 0:00:08.649 **** 2025-09-06 00:39:55.717380 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.717390 | orchestrator | 2025-09-06 00:39:55.717401 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-06 00:39:55.717412 | orchestrator | Saturday 06 September 2025 00:39:49 +0000 (0:00:00.140) 0:00:08.790 **** 2025-09-06 00:39:55.717423 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'}}) 2025-09-06 00:39:55.717434 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e6b4ea58-4fde-56e5-979f-346e927a82c3'}}) 2025-09-06 00:39:55.717445 | orchestrator | 2025-09-06 00:39:55.717456 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-06 00:39:55.717467 | orchestrator | Saturday 06 September 2025 00:39:50 +0000 (0:00:00.184) 0:00:08.974 **** 2025-09-06 00:39:55.717481 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'}) 2025-09-06 00:39:55.717522 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'}) 2025-09-06 00:39:55.717535 | orchestrator | 2025-09-06 00:39:55.717564 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-06 00:39:55.717587 | orchestrator | Saturday 06 September 2025 00:39:52 +0000 (0:00:01.923) 0:00:10.897 **** 2025-09-06 00:39:55.717600 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:39:55.717614 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:39:55.717627 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.717639 | orchestrator | 2025-09-06 00:39:55.717653 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-06 
00:39:55.717665 | orchestrator | Saturday 06 September 2025 00:39:52 +0000 (0:00:00.179) 0:00:11.077 **** 2025-09-06 00:39:55.717677 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'}) 2025-09-06 00:39:55.717691 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'}) 2025-09-06 00:39:55.717703 | orchestrator | 2025-09-06 00:39:55.717716 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-06 00:39:55.717728 | orchestrator | Saturday 06 September 2025 00:39:53 +0000 (0:00:01.372) 0:00:12.450 **** 2025-09-06 00:39:55.717741 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:39:55.717754 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:39:55.717767 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.717780 | orchestrator | 2025-09-06 00:39:55.717793 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-06 00:39:55.717806 | orchestrator | Saturday 06 September 2025 00:39:53 +0000 (0:00:00.160) 0:00:12.610 **** 2025-09-06 00:39:55.717819 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.717832 | orchestrator | 2025-09-06 00:39:55.717843 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-06 00:39:55.717871 | orchestrator | Saturday 06 September 2025 00:39:53 +0000 (0:00:00.140) 0:00:12.751 **** 2025-09-06 00:39:55.717884 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:39:55.717895 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:39:55.717905 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.717916 | orchestrator | 2025-09-06 00:39:55.717927 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-06 00:39:55.717938 | orchestrator | Saturday 06 September 2025 00:39:54 +0000 (0:00:00.407) 0:00:13.159 **** 2025-09-06 00:39:55.717948 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.717978 | orchestrator | 2025-09-06 00:39:55.717989 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-06 00:39:55.718001 | orchestrator | Saturday 06 September 2025 00:39:54 +0000 (0:00:00.139) 0:00:13.299 **** 2025-09-06 00:39:55.718011 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:39:55.718081 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:39:55.718093 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.718104 | orchestrator | 2025-09-06 00:39:55.718114 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-09-06 00:39:55.718125 | orchestrator | Saturday 06 September 2025 00:39:54 +0000 (0:00:00.151) 0:00:13.451 **** 2025-09-06 00:39:55.718136 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.718147 | orchestrator | 2025-09-06 00:39:55.718157 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-06 00:39:55.718168 | orchestrator | Saturday 06 September 2025 00:39:54 +0000 (0:00:00.138) 0:00:13.590 **** 2025-09-06 00:39:55.718179 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:39:55.718190 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:39:55.718201 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.718212 | orchestrator | 2025-09-06 00:39:55.718223 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-06 00:39:55.718234 | orchestrator | Saturday 06 September 2025 00:39:54 +0000 (0:00:00.148) 0:00:13.739 **** 2025-09-06 00:39:55.718245 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:39:55.718256 | orchestrator | 2025-09-06 00:39:55.718266 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-06 00:39:55.718277 | orchestrator | Saturday 06 September 2025 00:39:54 +0000 (0:00:00.133) 0:00:13.872 **** 2025-09-06 00:39:55.718294 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:39:55.718305 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:39:55.718316 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.718327 | orchestrator | 2025-09-06 00:39:55.718338 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-06 00:39:55.718348 | orchestrator | Saturday 06 September 2025 00:39:55 +0000 (0:00:00.144) 0:00:14.017 **** 2025-09-06 00:39:55.718359 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:39:55.718371 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:39:55.718381 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.718392 | orchestrator | 2025-09-06 00:39:55.718403 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-06 00:39:55.718414 | orchestrator | Saturday 06 September 2025 00:39:55 +0000 (0:00:00.153) 0:00:14.171 **** 2025-09-06 00:39:55.718425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:39:55.718436 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  
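For each entry in ceph_osd_devices (on testbed-node-3: sdb and sdc, each with an osd_lvm_uuid), the "Create block VGs" and "Create block LVs" tasks above create a volume group named ceph-<uuid> on the raw disk and a logical volume named osd-block-<uuid> inside it; these names reappear later in the lvm_volumes checks. A minimal sketch of equivalent tasks using the community.general LVM modules, with the device path and UUID copied from the log output; this is an illustration, not the playbook that actually ran:

# Illustrative equivalent of "Create block VGs" / "Create block LVs" for /dev/sdb
- name: Create block VG
  community.general.lvg:
    vg: ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567
    pvs: /dev/sdb

- name: Create block LV spanning the whole VG
  community.general.lvol:
    vg: ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567
    lv: osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567
    size: 100%VG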
2025-09-06 00:39:55.718446 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.718457 | orchestrator | 2025-09-06 00:39:55.718468 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-06 00:39:55.718479 | orchestrator | Saturday 06 September 2025 00:39:55 +0000 (0:00:00.162) 0:00:14.333 **** 2025-09-06 00:39:55.718490 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.718507 | orchestrator | 2025-09-06 00:39:55.718518 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-06 00:39:55.718529 | orchestrator | Saturday 06 September 2025 00:39:55 +0000 (0:00:00.135) 0:00:14.468 **** 2025-09-06 00:39:55.718540 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:39:55.718551 | orchestrator | 2025-09-06 00:39:55.718569 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-06 00:40:02.535026 | orchestrator | Saturday 06 September 2025 00:39:55 +0000 (0:00:00.137) 0:00:14.606 **** 2025-09-06 00:40:02.535139 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.535156 | orchestrator | 2025-09-06 00:40:02.535169 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-06 00:40:02.535181 | orchestrator | Saturday 06 September 2025 00:39:55 +0000 (0:00:00.156) 0:00:14.763 **** 2025-09-06 00:40:02.535192 | orchestrator | ok: [testbed-node-3] => { 2025-09-06 00:40:02.535203 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-06 00:40:02.535215 | orchestrator | } 2025-09-06 00:40:02.535226 | orchestrator | 2025-09-06 00:40:02.535237 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-06 00:40:02.535248 | orchestrator | Saturday 06 September 2025 00:39:56 +0000 (0:00:00.418) 0:00:15.181 **** 2025-09-06 00:40:02.535260 | orchestrator | ok: [testbed-node-3] => { 2025-09-06 00:40:02.535271 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-06 00:40:02.535282 | orchestrator | } 2025-09-06 00:40:02.535293 | orchestrator | 2025-09-06 00:40:02.535304 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-06 00:40:02.535315 | orchestrator | Saturday 06 September 2025 00:39:56 +0000 (0:00:00.162) 0:00:15.343 **** 2025-09-06 00:40:02.535326 | orchestrator | ok: [testbed-node-3] => { 2025-09-06 00:40:02.535337 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-06 00:40:02.535348 | orchestrator | } 2025-09-06 00:40:02.535360 | orchestrator | 2025-09-06 00:40:02.535372 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-06 00:40:02.535383 | orchestrator | Saturday 06 September 2025 00:39:56 +0000 (0:00:00.176) 0:00:15.520 **** 2025-09-06 00:40:02.535394 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:40:02.535405 | orchestrator | 2025-09-06 00:40:02.535416 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-06 00:40:02.535442 | orchestrator | Saturday 06 September 2025 00:39:57 +0000 (0:00:00.719) 0:00:16.239 **** 2025-09-06 00:40:02.535453 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:40:02.535464 | orchestrator | 2025-09-06 00:40:02.535475 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-06 00:40:02.535486 | orchestrator | Saturday 06 September 2025 00:39:57 +0000 
(0:00:00.514) 0:00:16.754 **** 2025-09-06 00:40:02.535509 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:40:02.535522 | orchestrator | 2025-09-06 00:40:02.535535 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-06 00:40:02.535549 | orchestrator | Saturday 06 September 2025 00:39:58 +0000 (0:00:00.512) 0:00:17.266 **** 2025-09-06 00:40:02.535561 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:40:02.535574 | orchestrator | 2025-09-06 00:40:02.535587 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-06 00:40:02.535600 | orchestrator | Saturday 06 September 2025 00:39:58 +0000 (0:00:00.143) 0:00:17.410 **** 2025-09-06 00:40:02.535613 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.535626 | orchestrator | 2025-09-06 00:40:02.535639 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-06 00:40:02.535652 | orchestrator | Saturday 06 September 2025 00:39:58 +0000 (0:00:00.110) 0:00:17.521 **** 2025-09-06 00:40:02.535664 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.535678 | orchestrator | 2025-09-06 00:40:02.535691 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-06 00:40:02.535703 | orchestrator | Saturday 06 September 2025 00:39:58 +0000 (0:00:00.112) 0:00:17.634 **** 2025-09-06 00:40:02.535716 | orchestrator | ok: [testbed-node-3] => { 2025-09-06 00:40:02.535752 | orchestrator |  "vgs_report": { 2025-09-06 00:40:02.535767 | orchestrator |  "vg": [] 2025-09-06 00:40:02.535780 | orchestrator |  } 2025-09-06 00:40:02.535793 | orchestrator | } 2025-09-06 00:40:02.535807 | orchestrator | 2025-09-06 00:40:02.535820 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-06 00:40:02.535833 | orchestrator | Saturday 06 September 2025 00:39:58 +0000 (0:00:00.171) 0:00:17.805 **** 2025-09-06 00:40:02.535846 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.535859 | orchestrator | 2025-09-06 00:40:02.535871 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-06 00:40:02.535882 | orchestrator | Saturday 06 September 2025 00:39:59 +0000 (0:00:00.139) 0:00:17.944 **** 2025-09-06 00:40:02.535893 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.535904 | orchestrator | 2025-09-06 00:40:02.535914 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-06 00:40:02.535925 | orchestrator | Saturday 06 September 2025 00:39:59 +0000 (0:00:00.146) 0:00:18.091 **** 2025-09-06 00:40:02.535936 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.535974 | orchestrator | 2025-09-06 00:40:02.535988 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-06 00:40:02.535999 | orchestrator | Saturday 06 September 2025 00:39:59 +0000 (0:00:00.316) 0:00:18.407 **** 2025-09-06 00:40:02.536010 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536020 | orchestrator | 2025-09-06 00:40:02.536031 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-06 00:40:02.536042 | orchestrator | Saturday 06 September 2025 00:39:59 +0000 (0:00:00.153) 0:00:18.561 **** 2025-09-06 00:40:02.536053 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536064 | orchestrator | 
2025-09-06 00:40:02.536094 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-06 00:40:02.536106 | orchestrator | Saturday 06 September 2025 00:39:59 +0000 (0:00:00.140) 0:00:18.701 **** 2025-09-06 00:40:02.536117 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536128 | orchestrator | 2025-09-06 00:40:02.536139 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-06 00:40:02.536150 | orchestrator | Saturday 06 September 2025 00:39:59 +0000 (0:00:00.138) 0:00:18.839 **** 2025-09-06 00:40:02.536161 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536171 | orchestrator | 2025-09-06 00:40:02.536182 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-06 00:40:02.536193 | orchestrator | Saturday 06 September 2025 00:40:00 +0000 (0:00:00.176) 0:00:19.016 **** 2025-09-06 00:40:02.536204 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536215 | orchestrator | 2025-09-06 00:40:02.536226 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-06 00:40:02.536255 | orchestrator | Saturday 06 September 2025 00:40:00 +0000 (0:00:00.161) 0:00:19.178 **** 2025-09-06 00:40:02.536267 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536278 | orchestrator | 2025-09-06 00:40:02.536289 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-06 00:40:02.536300 | orchestrator | Saturday 06 September 2025 00:40:00 +0000 (0:00:00.150) 0:00:19.329 **** 2025-09-06 00:40:02.536311 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536322 | orchestrator | 2025-09-06 00:40:02.536333 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-06 00:40:02.536344 | orchestrator | Saturday 06 September 2025 00:40:00 +0000 (0:00:00.148) 0:00:19.478 **** 2025-09-06 00:40:02.536355 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536366 | orchestrator | 2025-09-06 00:40:02.536377 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-06 00:40:02.536388 | orchestrator | Saturday 06 September 2025 00:40:00 +0000 (0:00:00.153) 0:00:19.632 **** 2025-09-06 00:40:02.536398 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536409 | orchestrator | 2025-09-06 00:40:02.536430 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-06 00:40:02.536441 | orchestrator | Saturday 06 September 2025 00:40:00 +0000 (0:00:00.145) 0:00:19.777 **** 2025-09-06 00:40:02.536452 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536463 | orchestrator | 2025-09-06 00:40:02.536474 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-06 00:40:02.536485 | orchestrator | Saturday 06 September 2025 00:40:01 +0000 (0:00:00.147) 0:00:19.925 **** 2025-09-06 00:40:02.536496 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536507 | orchestrator | 2025-09-06 00:40:02.536518 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-06 00:40:02.536529 | orchestrator | Saturday 06 September 2025 00:40:01 +0000 (0:00:00.143) 0:00:20.069 **** 2025-09-06 00:40:02.536541 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:40:02.536554 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:40:02.536565 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536575 | orchestrator | 2025-09-06 00:40:02.536587 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-06 00:40:02.536597 | orchestrator | Saturday 06 September 2025 00:40:01 +0000 (0:00:00.344) 0:00:20.413 **** 2025-09-06 00:40:02.536608 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:40:02.536620 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:40:02.536631 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536642 | orchestrator | 2025-09-06 00:40:02.536653 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-06 00:40:02.536663 | orchestrator | Saturday 06 September 2025 00:40:01 +0000 (0:00:00.194) 0:00:20.608 **** 2025-09-06 00:40:02.536681 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:40:02.536692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:40:02.536703 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536714 | orchestrator | 2025-09-06 00:40:02.536725 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-06 00:40:02.536736 | orchestrator | Saturday 06 September 2025 00:40:01 +0000 (0:00:00.179) 0:00:20.788 **** 2025-09-06 00:40:02.536747 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:40:02.536758 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:40:02.536769 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536780 | orchestrator | 2025-09-06 00:40:02.536791 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-06 00:40:02.536802 | orchestrator | Saturday 06 September 2025 00:40:02 +0000 (0:00:00.222) 0:00:21.011 **** 2025-09-06 00:40:02.536813 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:40:02.536824 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:40:02.536835 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:02.536852 | orchestrator | 2025-09-06 00:40:02.536864 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-09-06 00:40:02.536874 | orchestrator | Saturday 06 September 2025 00:40:02 +0000 (0:00:00.209) 0:00:21.221 **** 2025-09-06 00:40:02.536885 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:40:02.536903 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:40:07.776159 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:07.776266 | orchestrator | 2025-09-06 00:40:07.776282 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-06 00:40:07.776295 | orchestrator | Saturday 06 September 2025 00:40:02 +0000 (0:00:00.202) 0:00:21.423 **** 2025-09-06 00:40:07.776306 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:40:07.776318 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:40:07.776329 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:07.776340 | orchestrator | 2025-09-06 00:40:07.776351 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-06 00:40:07.776379 | orchestrator | Saturday 06 September 2025 00:40:02 +0000 (0:00:00.192) 0:00:21.616 **** 2025-09-06 00:40:07.776401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:40:07.776412 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:40:07.776423 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:07.776435 | orchestrator | 2025-09-06 00:40:07.776446 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-06 00:40:07.776456 | orchestrator | Saturday 06 September 2025 00:40:02 +0000 (0:00:00.174) 0:00:21.790 **** 2025-09-06 00:40:07.776467 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:40:07.776478 | orchestrator | 2025-09-06 00:40:07.776489 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-06 00:40:07.776500 | orchestrator | Saturday 06 September 2025 00:40:03 +0000 (0:00:00.509) 0:00:22.300 **** 2025-09-06 00:40:07.776511 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:40:07.776522 | orchestrator | 2025-09-06 00:40:07.776532 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-06 00:40:07.776543 | orchestrator | Saturday 06 September 2025 00:40:03 +0000 (0:00:00.509) 0:00:22.809 **** 2025-09-06 00:40:07.776554 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:40:07.776565 | orchestrator | 2025-09-06 00:40:07.776575 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-06 00:40:07.776586 | orchestrator | Saturday 06 September 2025 00:40:04 +0000 (0:00:00.157) 0:00:22.966 **** 2025-09-06 00:40:07.776598 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'vg_name': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'}) 2025-09-06 00:40:07.776609 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'vg_name': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'}) 2025-09-06 00:40:07.776620 | orchestrator | 2025-09-06 00:40:07.776631 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-06 00:40:07.776642 | orchestrator | Saturday 06 September 2025 00:40:04 +0000 (0:00:00.191) 0:00:23.158 **** 2025-09-06 00:40:07.776653 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:40:07.776684 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:40:07.776698 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:07.776711 | orchestrator | 2025-09-06 00:40:07.776723 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-06 00:40:07.776736 | orchestrator | Saturday 06 September 2025 00:40:04 +0000 (0:00:00.329) 0:00:23.487 **** 2025-09-06 00:40:07.776749 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:40:07.776762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:40:07.776775 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:07.776788 | orchestrator | 2025-09-06 00:40:07.776800 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-06 00:40:07.776812 | orchestrator | Saturday 06 September 2025 00:40:04 +0000 (0:00:00.137) 0:00:23.624 **** 2025-09-06 00:40:07.776825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'})  2025-09-06 00:40:07.776839 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'})  2025-09-06 00:40:07.776851 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:40:07.776864 | orchestrator | 2025-09-06 00:40:07.776877 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-06 00:40:07.776889 | orchestrator | Saturday 06 September 2025 00:40:04 +0000 (0:00:00.157) 0:00:23.781 **** 2025-09-06 00:40:07.776902 | orchestrator | ok: [testbed-node-3] => { 2025-09-06 00:40:07.776915 | orchestrator |  "lvm_report": { 2025-09-06 00:40:07.776928 | orchestrator |  "lv": [ 2025-09-06 00:40:07.776967 | orchestrator |  { 2025-09-06 00:40:07.776999 | orchestrator |  "lv_name": "osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567", 2025-09-06 00:40:07.777014 | orchestrator |  "vg_name": "ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567" 2025-09-06 00:40:07.777027 | orchestrator |  }, 2025-09-06 00:40:07.777041 | orchestrator |  { 2025-09-06 00:40:07.777055 | orchestrator |  "lv_name": "osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3", 2025-09-06 00:40:07.777068 | orchestrator |  "vg_name": 
"ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3" 2025-09-06 00:40:07.777080 | orchestrator |  } 2025-09-06 00:40:07.777091 | orchestrator |  ], 2025-09-06 00:40:07.777102 | orchestrator |  "pv": [ 2025-09-06 00:40:07.777112 | orchestrator |  { 2025-09-06 00:40:07.777123 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-06 00:40:07.777134 | orchestrator |  "vg_name": "ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567" 2025-09-06 00:40:07.777144 | orchestrator |  }, 2025-09-06 00:40:07.777155 | orchestrator |  { 2025-09-06 00:40:07.777166 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-06 00:40:07.777177 | orchestrator |  "vg_name": "ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3" 2025-09-06 00:40:07.777187 | orchestrator |  } 2025-09-06 00:40:07.777198 | orchestrator |  ] 2025-09-06 00:40:07.777209 | orchestrator |  } 2025-09-06 00:40:07.777219 | orchestrator | } 2025-09-06 00:40:07.777230 | orchestrator | 2025-09-06 00:40:07.777241 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-06 00:40:07.777251 | orchestrator | 2025-09-06 00:40:07.777262 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-06 00:40:07.777273 | orchestrator | Saturday 06 September 2025 00:40:05 +0000 (0:00:00.277) 0:00:24.059 **** 2025-09-06 00:40:07.777284 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-06 00:40:07.777303 | orchestrator | 2025-09-06 00:40:07.777314 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-06 00:40:07.777325 | orchestrator | Saturday 06 September 2025 00:40:05 +0000 (0:00:00.257) 0:00:24.317 **** 2025-09-06 00:40:07.777336 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:40:07.777346 | orchestrator | 2025-09-06 00:40:07.777357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:07.777368 | orchestrator | Saturday 06 September 2025 00:40:05 +0000 (0:00:00.227) 0:00:24.545 **** 2025-09-06 00:40:07.777396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-06 00:40:07.777407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-06 00:40:07.777418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-06 00:40:07.777428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-06 00:40:07.777439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-06 00:40:07.777450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-06 00:40:07.777460 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-06 00:40:07.777476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-06 00:40:07.777487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-06 00:40:07.777498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-06 00:40:07.777508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-06 00:40:07.777519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-09-06 00:40:07.777530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-06 00:40:07.777541 | orchestrator | 2025-09-06 00:40:07.777551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:07.777562 | orchestrator | Saturday 06 September 2025 00:40:06 +0000 (0:00:00.350) 0:00:24.895 **** 2025-09-06 00:40:07.777573 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:07.777583 | orchestrator | 2025-09-06 00:40:07.777594 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:07.777605 | orchestrator | Saturday 06 September 2025 00:40:06 +0000 (0:00:00.182) 0:00:25.077 **** 2025-09-06 00:40:07.777615 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:07.777626 | orchestrator | 2025-09-06 00:40:07.777637 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:07.777647 | orchestrator | Saturday 06 September 2025 00:40:06 +0000 (0:00:00.192) 0:00:25.270 **** 2025-09-06 00:40:07.777658 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:07.777669 | orchestrator | 2025-09-06 00:40:07.777679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:07.777690 | orchestrator | Saturday 06 September 2025 00:40:06 +0000 (0:00:00.514) 0:00:25.785 **** 2025-09-06 00:40:07.777701 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:07.777711 | orchestrator | 2025-09-06 00:40:07.777722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:07.777732 | orchestrator | Saturday 06 September 2025 00:40:07 +0000 (0:00:00.201) 0:00:25.987 **** 2025-09-06 00:40:07.777743 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:07.777754 | orchestrator | 2025-09-06 00:40:07.777764 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:07.777775 | orchestrator | Saturday 06 September 2025 00:40:07 +0000 (0:00:00.201) 0:00:26.189 **** 2025-09-06 00:40:07.777786 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:07.777796 | orchestrator | 2025-09-06 00:40:07.777815 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:07.777826 | orchestrator | Saturday 06 September 2025 00:40:07 +0000 (0:00:00.235) 0:00:26.425 **** 2025-09-06 00:40:07.777837 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:07.777848 | orchestrator | 2025-09-06 00:40:07.777866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:17.715183 | orchestrator | Saturday 06 September 2025 00:40:07 +0000 (0:00:00.240) 0:00:26.665 **** 2025-09-06 00:40:17.715319 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.715337 | orchestrator | 2025-09-06 00:40:17.715350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:17.715362 | orchestrator | Saturday 06 September 2025 00:40:07 +0000 (0:00:00.183) 0:00:26.849 **** 2025-09-06 00:40:17.715373 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1) 2025-09-06 00:40:17.715385 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1) 2025-09-06 
00:40:17.715395 | orchestrator | 2025-09-06 00:40:17.715406 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:17.715417 | orchestrator | Saturday 06 September 2025 00:40:08 +0000 (0:00:00.331) 0:00:27.180 **** 2025-09-06 00:40:17.715427 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74) 2025-09-06 00:40:17.715438 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74) 2025-09-06 00:40:17.715449 | orchestrator | 2025-09-06 00:40:17.715459 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:17.715470 | orchestrator | Saturday 06 September 2025 00:40:08 +0000 (0:00:00.405) 0:00:27.585 **** 2025-09-06 00:40:17.715481 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba) 2025-09-06 00:40:17.715491 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba) 2025-09-06 00:40:17.715502 | orchestrator | 2025-09-06 00:40:17.715513 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:17.715524 | orchestrator | Saturday 06 September 2025 00:40:09 +0000 (0:00:00.412) 0:00:27.998 **** 2025-09-06 00:40:17.715534 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7) 2025-09-06 00:40:17.715545 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7) 2025-09-06 00:40:17.715556 | orchestrator | 2025-09-06 00:40:17.715566 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:17.715578 | orchestrator | Saturday 06 September 2025 00:40:09 +0000 (0:00:00.405) 0:00:28.404 **** 2025-09-06 00:40:17.715591 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-06 00:40:17.715603 | orchestrator | 2025-09-06 00:40:17.715616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.715629 | orchestrator | Saturday 06 September 2025 00:40:09 +0000 (0:00:00.312) 0:00:28.716 **** 2025-09-06 00:40:17.715642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-06 00:40:17.715670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-06 00:40:17.715683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-06 00:40:17.715696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-06 00:40:17.715708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-06 00:40:17.715721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-06 00:40:17.715733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-06 00:40:17.715769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-06 00:40:17.715782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-06 00:40:17.715794 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-06 00:40:17.715806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-06 00:40:17.715818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-06 00:40:17.715831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-06 00:40:17.715843 | orchestrator | 2025-09-06 00:40:17.715856 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.715869 | orchestrator | Saturday 06 September 2025 00:40:10 +0000 (0:00:00.519) 0:00:29.236 **** 2025-09-06 00:40:17.715882 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.715894 | orchestrator | 2025-09-06 00:40:17.715908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.715922 | orchestrator | Saturday 06 September 2025 00:40:10 +0000 (0:00:00.188) 0:00:29.425 **** 2025-09-06 00:40:17.715963 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.715976 | orchestrator | 2025-09-06 00:40:17.715987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.715998 | orchestrator | Saturday 06 September 2025 00:40:10 +0000 (0:00:00.192) 0:00:29.618 **** 2025-09-06 00:40:17.716008 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716019 | orchestrator | 2025-09-06 00:40:17.716029 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.716040 | orchestrator | Saturday 06 September 2025 00:40:10 +0000 (0:00:00.199) 0:00:29.818 **** 2025-09-06 00:40:17.716051 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716062 | orchestrator | 2025-09-06 00:40:17.716089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.716101 | orchestrator | Saturday 06 September 2025 00:40:11 +0000 (0:00:00.203) 0:00:30.021 **** 2025-09-06 00:40:17.716111 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716122 | orchestrator | 2025-09-06 00:40:17.716133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.716143 | orchestrator | Saturday 06 September 2025 00:40:11 +0000 (0:00:00.219) 0:00:30.241 **** 2025-09-06 00:40:17.716154 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716164 | orchestrator | 2025-09-06 00:40:17.716175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.716186 | orchestrator | Saturday 06 September 2025 00:40:11 +0000 (0:00:00.190) 0:00:30.432 **** 2025-09-06 00:40:17.716197 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716207 | orchestrator | 2025-09-06 00:40:17.716218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.716229 | orchestrator | Saturday 06 September 2025 00:40:11 +0000 (0:00:00.189) 0:00:30.621 **** 2025-09-06 00:40:17.716239 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716250 | orchestrator | 2025-09-06 00:40:17.716261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.716271 | orchestrator 
| Saturday 06 September 2025 00:40:11 +0000 (0:00:00.207) 0:00:30.828 **** 2025-09-06 00:40:17.716282 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-06 00:40:17.716293 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-06 00:40:17.716303 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-06 00:40:17.716314 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-06 00:40:17.716324 | orchestrator | 2025-09-06 00:40:17.716335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.716346 | orchestrator | Saturday 06 September 2025 00:40:12 +0000 (0:00:00.835) 0:00:31.663 **** 2025-09-06 00:40:17.716365 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716376 | orchestrator | 2025-09-06 00:40:17.716387 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.716398 | orchestrator | Saturday 06 September 2025 00:40:12 +0000 (0:00:00.201) 0:00:31.864 **** 2025-09-06 00:40:17.716408 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716419 | orchestrator | 2025-09-06 00:40:17.716430 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.716440 | orchestrator | Saturday 06 September 2025 00:40:13 +0000 (0:00:00.192) 0:00:32.057 **** 2025-09-06 00:40:17.716451 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716462 | orchestrator | 2025-09-06 00:40:17.716472 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:17.716483 | orchestrator | Saturday 06 September 2025 00:40:13 +0000 (0:00:00.587) 0:00:32.644 **** 2025-09-06 00:40:17.716493 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716504 | orchestrator | 2025-09-06 00:40:17.716515 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-06 00:40:17.716525 | orchestrator | Saturday 06 September 2025 00:40:14 +0000 (0:00:00.272) 0:00:32.917 **** 2025-09-06 00:40:17.716536 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716547 | orchestrator | 2025-09-06 00:40:17.716557 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-06 00:40:17.716568 | orchestrator | Saturday 06 September 2025 00:40:14 +0000 (0:00:00.146) 0:00:33.064 **** 2025-09-06 00:40:17.716579 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9969153-fa79-5368-8c16-a33775dfe5f6'}}) 2025-09-06 00:40:17.716590 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '473d4611-c66c-5516-9b6d-fd0b18ba2fe0'}}) 2025-09-06 00:40:17.716600 | orchestrator | 2025-09-06 00:40:17.716611 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-06 00:40:17.716621 | orchestrator | Saturday 06 September 2025 00:40:14 +0000 (0:00:00.207) 0:00:33.272 **** 2025-09-06 00:40:17.716633 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'}) 2025-09-06 00:40:17.716644 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'}) 2025-09-06 00:40:17.716655 | orchestrator | 2025-09-06 00:40:17.716666 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-09-06 00:40:17.716677 | orchestrator | Saturday 06 September 2025 00:40:16 +0000 (0:00:01.870) 0:00:35.143 **** 2025-09-06 00:40:17.716687 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:17.716699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:17.716710 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:17.716720 | orchestrator | 2025-09-06 00:40:17.716731 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-06 00:40:17.716742 | orchestrator | Saturday 06 September 2025 00:40:16 +0000 (0:00:00.174) 0:00:35.317 **** 2025-09-06 00:40:17.716753 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'}) 2025-09-06 00:40:17.716763 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'}) 2025-09-06 00:40:17.716774 | orchestrator | 2025-09-06 00:40:17.716791 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-06 00:40:22.716806 | orchestrator | Saturday 06 September 2025 00:40:17 +0000 (0:00:01.284) 0:00:36.601 **** 2025-09-06 00:40:22.716918 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:22.716977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:22.716990 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717001 | orchestrator | 2025-09-06 00:40:22.717013 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-06 00:40:22.717024 | orchestrator | Saturday 06 September 2025 00:40:17 +0000 (0:00:00.146) 0:00:36.748 **** 2025-09-06 00:40:22.717035 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717046 | orchestrator | 2025-09-06 00:40:22.717056 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-06 00:40:22.717068 | orchestrator | Saturday 06 September 2025 00:40:17 +0000 (0:00:00.128) 0:00:36.877 **** 2025-09-06 00:40:22.717079 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:22.717104 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:22.717116 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717127 | orchestrator | 2025-09-06 00:40:22.717138 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-06 00:40:22.717148 | orchestrator | Saturday 06 September 2025 00:40:18 +0000 (0:00:00.148) 0:00:37.026 **** 2025-09-06 00:40:22.717159 | orchestrator | skipping: [testbed-node-4] 
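(Editor's note, for context on the "Create block VGs" / "Create block LVs" tasks that just reported "changed" for testbed-node-4: the play derives one volume group and one logical volume per entry in ceph_osd_devices, naming them ceph-<osd_lvm_uuid> and osd-block-<osd_lvm_uuid>. The device names and UUIDs below are taken from the testbed-node-4 output above; the task structure and the community.general.lvg / community.general.lvol modules are assumptions for illustration only, not the actual task file executed by this play.)

# Sketch only: inventory data shaped like the items printed by
# "Create dict of block VGs -> PVs from ceph_osd_devices" above.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: e9969153-fa79-5368-8c16-a33775dfe5f6
  sdc:
    osd_lvm_uuid: 473d4611-c66c-5516-9b6d-fd0b18ba2fe0

# Sketch only: tasks roughly equivalent to "Create block VGs" and
# "Create block LVs" (hypothetical; the real play may use other modules).
- name: Create block VGs
  community.general.lvg:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    pvs: "/dev/{{ item.key }}"
  loop: "{{ ceph_osd_devices | dict2items }}"

- name: Create block LVs
  community.general.lvol:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
    size: "100%FREE"
  loop: "{{ ceph_osd_devices | dict2items }}"

(The resulting VG/LV pairs, e.g. ceph-e9969153-…/osd-block-e9969153-…, are what the later "Get list of Ceph LVs with associated VGs" and "Print LVM report data" tasks report further down in this log.)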
2025-09-06 00:40:22.717170 | orchestrator | 2025-09-06 00:40:22.717181 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-06 00:40:22.717191 | orchestrator | Saturday 06 September 2025 00:40:18 +0000 (0:00:00.124) 0:00:37.151 **** 2025-09-06 00:40:22.717202 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:22.717213 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:22.717224 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717235 | orchestrator | 2025-09-06 00:40:22.717246 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-06 00:40:22.717257 | orchestrator | Saturday 06 September 2025 00:40:18 +0000 (0:00:00.147) 0:00:37.298 **** 2025-09-06 00:40:22.717274 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717285 | orchestrator | 2025-09-06 00:40:22.717296 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-06 00:40:22.717306 | orchestrator | Saturday 06 September 2025 00:40:18 +0000 (0:00:00.241) 0:00:37.539 **** 2025-09-06 00:40:22.717317 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:22.717328 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:22.717339 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717350 | orchestrator | 2025-09-06 00:40:22.717363 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-06 00:40:22.717375 | orchestrator | Saturday 06 September 2025 00:40:18 +0000 (0:00:00.142) 0:00:37.682 **** 2025-09-06 00:40:22.717388 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:40:22.717401 | orchestrator | 2025-09-06 00:40:22.717413 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-06 00:40:22.717426 | orchestrator | Saturday 06 September 2025 00:40:18 +0000 (0:00:00.118) 0:00:37.801 **** 2025-09-06 00:40:22.717446 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:22.717460 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:22.717473 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717485 | orchestrator | 2025-09-06 00:40:22.717497 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-06 00:40:22.717510 | orchestrator | Saturday 06 September 2025 00:40:19 +0000 (0:00:00.164) 0:00:37.966 **** 2025-09-06 00:40:22.717522 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:22.717534 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:22.717547 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717559 | orchestrator | 2025-09-06 00:40:22.717572 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-06 00:40:22.717585 | orchestrator | Saturday 06 September 2025 00:40:19 +0000 (0:00:00.117) 0:00:38.083 **** 2025-09-06 00:40:22.717613 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:22.717626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:22.717639 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717651 | orchestrator | 2025-09-06 00:40:22.717664 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-06 00:40:22.717676 | orchestrator | Saturday 06 September 2025 00:40:19 +0000 (0:00:00.125) 0:00:38.208 **** 2025-09-06 00:40:22.717688 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717701 | orchestrator | 2025-09-06 00:40:22.717714 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-06 00:40:22.717725 | orchestrator | Saturday 06 September 2025 00:40:19 +0000 (0:00:00.127) 0:00:38.336 **** 2025-09-06 00:40:22.717736 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717747 | orchestrator | 2025-09-06 00:40:22.717757 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-06 00:40:22.717768 | orchestrator | Saturday 06 September 2025 00:40:19 +0000 (0:00:00.116) 0:00:38.453 **** 2025-09-06 00:40:22.717778 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.717789 | orchestrator | 2025-09-06 00:40:22.717799 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-06 00:40:22.717810 | orchestrator | Saturday 06 September 2025 00:40:19 +0000 (0:00:00.113) 0:00:38.566 **** 2025-09-06 00:40:22.717820 | orchestrator | ok: [testbed-node-4] => { 2025-09-06 00:40:22.717831 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-06 00:40:22.717842 | orchestrator | } 2025-09-06 00:40:22.717852 | orchestrator | 2025-09-06 00:40:22.717863 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-06 00:40:22.717873 | orchestrator | Saturday 06 September 2025 00:40:19 +0000 (0:00:00.121) 0:00:38.688 **** 2025-09-06 00:40:22.717884 | orchestrator | ok: [testbed-node-4] => { 2025-09-06 00:40:22.717894 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-06 00:40:22.717905 | orchestrator | } 2025-09-06 00:40:22.717915 | orchestrator | 2025-09-06 00:40:22.717926 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-06 00:40:22.717951 | orchestrator | Saturday 06 September 2025 00:40:19 +0000 (0:00:00.129) 0:00:38.817 **** 2025-09-06 00:40:22.717962 | orchestrator | ok: [testbed-node-4] => { 2025-09-06 00:40:22.717973 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-06 00:40:22.717990 | orchestrator | } 2025-09-06 00:40:22.718001 | orchestrator | 2025-09-06 00:40:22.718011 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-09-06 00:40:22.718069 | orchestrator | Saturday 06 September 2025 00:40:20 +0000 (0:00:00.132) 0:00:38.949 **** 2025-09-06 00:40:22.718080 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:40:22.718091 | orchestrator | 2025-09-06 00:40:22.718102 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-06 00:40:22.718113 | orchestrator | Saturday 06 September 2025 00:40:20 +0000 (0:00:00.661) 0:00:39.611 **** 2025-09-06 00:40:22.718129 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:40:22.718140 | orchestrator | 2025-09-06 00:40:22.718151 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-06 00:40:22.718161 | orchestrator | Saturday 06 September 2025 00:40:21 +0000 (0:00:00.518) 0:00:40.129 **** 2025-09-06 00:40:22.718172 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:40:22.718183 | orchestrator | 2025-09-06 00:40:22.718193 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-06 00:40:22.718204 | orchestrator | Saturday 06 September 2025 00:40:21 +0000 (0:00:00.502) 0:00:40.632 **** 2025-09-06 00:40:22.718215 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:40:22.718225 | orchestrator | 2025-09-06 00:40:22.718236 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-06 00:40:22.718247 | orchestrator | Saturday 06 September 2025 00:40:21 +0000 (0:00:00.138) 0:00:40.770 **** 2025-09-06 00:40:22.718257 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.718268 | orchestrator | 2025-09-06 00:40:22.718278 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-06 00:40:22.718289 | orchestrator | Saturday 06 September 2025 00:40:21 +0000 (0:00:00.100) 0:00:40.870 **** 2025-09-06 00:40:22.718300 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.718310 | orchestrator | 2025-09-06 00:40:22.718321 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-06 00:40:22.718331 | orchestrator | Saturday 06 September 2025 00:40:22 +0000 (0:00:00.097) 0:00:40.968 **** 2025-09-06 00:40:22.718342 | orchestrator | ok: [testbed-node-4] => { 2025-09-06 00:40:22.718353 | orchestrator |  "vgs_report": { 2025-09-06 00:40:22.718364 | orchestrator |  "vg": [] 2025-09-06 00:40:22.718375 | orchestrator |  } 2025-09-06 00:40:22.718386 | orchestrator | } 2025-09-06 00:40:22.718397 | orchestrator | 2025-09-06 00:40:22.718407 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-06 00:40:22.718418 | orchestrator | Saturday 06 September 2025 00:40:22 +0000 (0:00:00.123) 0:00:41.092 **** 2025-09-06 00:40:22.718429 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.718439 | orchestrator | 2025-09-06 00:40:22.718450 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-06 00:40:22.718461 | orchestrator | Saturday 06 September 2025 00:40:22 +0000 (0:00:00.124) 0:00:41.216 **** 2025-09-06 00:40:22.718471 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.718482 | orchestrator | 2025-09-06 00:40:22.718492 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-06 00:40:22.718503 | orchestrator | Saturday 06 September 2025 00:40:22 +0000 
(0:00:00.132) 0:00:41.348 **** 2025-09-06 00:40:22.718514 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.718524 | orchestrator | 2025-09-06 00:40:22.718535 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-06 00:40:22.718546 | orchestrator | Saturday 06 September 2025 00:40:22 +0000 (0:00:00.125) 0:00:41.474 **** 2025-09-06 00:40:22.718556 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:22.718567 | orchestrator | 2025-09-06 00:40:22.718578 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-06 00:40:22.718596 | orchestrator | Saturday 06 September 2025 00:40:22 +0000 (0:00:00.128) 0:00:41.603 **** 2025-09-06 00:40:26.802858 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.802990 | orchestrator | 2025-09-06 00:40:26.803028 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-06 00:40:26.803042 | orchestrator | Saturday 06 September 2025 00:40:22 +0000 (0:00:00.124) 0:00:41.727 **** 2025-09-06 00:40:26.803053 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803064 | orchestrator | 2025-09-06 00:40:26.803074 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-06 00:40:26.803085 | orchestrator | Saturday 06 September 2025 00:40:23 +0000 (0:00:00.226) 0:00:41.953 **** 2025-09-06 00:40:26.803096 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803107 | orchestrator | 2025-09-06 00:40:26.803118 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-06 00:40:26.803128 | orchestrator | Saturday 06 September 2025 00:40:23 +0000 (0:00:00.128) 0:00:42.082 **** 2025-09-06 00:40:26.803139 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803150 | orchestrator | 2025-09-06 00:40:26.803161 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-06 00:40:26.803171 | orchestrator | Saturday 06 September 2025 00:40:23 +0000 (0:00:00.124) 0:00:42.207 **** 2025-09-06 00:40:26.803182 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803193 | orchestrator | 2025-09-06 00:40:26.803203 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-06 00:40:26.803214 | orchestrator | Saturday 06 September 2025 00:40:23 +0000 (0:00:00.125) 0:00:42.333 **** 2025-09-06 00:40:26.803225 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803236 | orchestrator | 2025-09-06 00:40:26.803246 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-06 00:40:26.803257 | orchestrator | Saturday 06 September 2025 00:40:23 +0000 (0:00:00.139) 0:00:42.472 **** 2025-09-06 00:40:26.803268 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803279 | orchestrator | 2025-09-06 00:40:26.803289 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-06 00:40:26.803300 | orchestrator | Saturday 06 September 2025 00:40:23 +0000 (0:00:00.119) 0:00:42.592 **** 2025-09-06 00:40:26.803311 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803322 | orchestrator | 2025-09-06 00:40:26.803332 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-06 00:40:26.803343 | orchestrator | Saturday 06 September 2025 
00:40:23 +0000 (0:00:00.108) 0:00:42.700 **** 2025-09-06 00:40:26.803354 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803365 | orchestrator | 2025-09-06 00:40:26.803375 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-06 00:40:26.803386 | orchestrator | Saturday 06 September 2025 00:40:23 +0000 (0:00:00.117) 0:00:42.817 **** 2025-09-06 00:40:26.803397 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803410 | orchestrator | 2025-09-06 00:40:26.803423 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-06 00:40:26.803436 | orchestrator | Saturday 06 September 2025 00:40:24 +0000 (0:00:00.119) 0:00:42.937 **** 2025-09-06 00:40:26.803463 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:26.803479 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:26.803492 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803505 | orchestrator | 2025-09-06 00:40:26.803518 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-06 00:40:26.803531 | orchestrator | Saturday 06 September 2025 00:40:24 +0000 (0:00:00.125) 0:00:43.062 **** 2025-09-06 00:40:26.803545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:26.803557 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:26.803578 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803591 | orchestrator | 2025-09-06 00:40:26.803604 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-06 00:40:26.803617 | orchestrator | Saturday 06 September 2025 00:40:24 +0000 (0:00:00.118) 0:00:43.181 **** 2025-09-06 00:40:26.803630 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:26.803644 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:26.803655 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803666 | orchestrator | 2025-09-06 00:40:26.803677 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-06 00:40:26.803688 | orchestrator | Saturday 06 September 2025 00:40:24 +0000 (0:00:00.102) 0:00:43.284 **** 2025-09-06 00:40:26.803699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:26.803710 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:26.803721 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803732 | orchestrator | 2025-09-06 00:40:26.803742 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-06 00:40:26.803769 | orchestrator | Saturday 06 September 2025 00:40:24 +0000 (0:00:00.222) 0:00:43.507 **** 2025-09-06 00:40:26.803781 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:26.803792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:26.803803 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803814 | orchestrator | 2025-09-06 00:40:26.803825 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-06 00:40:26.803835 | orchestrator | Saturday 06 September 2025 00:40:24 +0000 (0:00:00.148) 0:00:43.656 **** 2025-09-06 00:40:26.803846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:26.803857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:26.803868 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803879 | orchestrator | 2025-09-06 00:40:26.803890 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-06 00:40:26.803901 | orchestrator | Saturday 06 September 2025 00:40:24 +0000 (0:00:00.152) 0:00:43.808 **** 2025-09-06 00:40:26.803912 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:26.803946 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:26.803959 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.803969 | orchestrator | 2025-09-06 00:40:26.803980 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-06 00:40:26.803991 | orchestrator | Saturday 06 September 2025 00:40:25 +0000 (0:00:00.142) 0:00:43.950 **** 2025-09-06 00:40:26.804002 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:26.804019 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:26.804030 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.804041 | orchestrator | 2025-09-06 00:40:26.804052 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-06 00:40:26.804095 | orchestrator | Saturday 06 September 2025 00:40:25 +0000 (0:00:00.142) 0:00:44.093 **** 2025-09-06 00:40:26.804108 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:40:26.804119 | orchestrator | 2025-09-06 00:40:26.804130 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-06 00:40:26.804141 | orchestrator | Saturday 06 September 2025 00:40:25 +0000 (0:00:00.495) 
0:00:44.589 **** 2025-09-06 00:40:26.804151 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:40:26.804162 | orchestrator | 2025-09-06 00:40:26.804173 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-06 00:40:26.804184 | orchestrator | Saturday 06 September 2025 00:40:26 +0000 (0:00:00.483) 0:00:45.072 **** 2025-09-06 00:40:26.804195 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:40:26.804206 | orchestrator | 2025-09-06 00:40:26.804217 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-06 00:40:26.804228 | orchestrator | Saturday 06 September 2025 00:40:26 +0000 (0:00:00.148) 0:00:45.221 **** 2025-09-06 00:40:26.804239 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'vg_name': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'}) 2025-09-06 00:40:26.804251 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'vg_name': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'}) 2025-09-06 00:40:26.804261 | orchestrator | 2025-09-06 00:40:26.804272 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-06 00:40:26.804283 | orchestrator | Saturday 06 September 2025 00:40:26 +0000 (0:00:00.165) 0:00:45.386 **** 2025-09-06 00:40:26.804294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:26.804305 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:26.804316 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:26.804327 | orchestrator | 2025-09-06 00:40:26.804337 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-06 00:40:26.804348 | orchestrator | Saturday 06 September 2025 00:40:26 +0000 (0:00:00.146) 0:00:45.533 **** 2025-09-06 00:40:26.804359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:26.804370 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:26.804388 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:32.362577 | orchestrator | 2025-09-06 00:40:32.362692 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-06 00:40:32.362708 | orchestrator | Saturday 06 September 2025 00:40:26 +0000 (0:00:00.155) 0:00:45.688 **** 2025-09-06 00:40:32.362720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'})  2025-09-06 00:40:32.362731 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'})  2025-09-06 00:40:32.362741 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:40:32.362752 | orchestrator | 2025-09-06 00:40:32.362762 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-06 00:40:32.362771 
| orchestrator | Saturday 06 September 2025 00:40:26 +0000 (0:00:00.149) 0:00:45.838 **** 2025-09-06 00:40:32.362806 | orchestrator | ok: [testbed-node-4] => { 2025-09-06 00:40:32.362816 | orchestrator |  "lvm_report": { 2025-09-06 00:40:32.362828 | orchestrator |  "lv": [ 2025-09-06 00:40:32.362838 | orchestrator |  { 2025-09-06 00:40:32.362848 | orchestrator |  "lv_name": "osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0", 2025-09-06 00:40:32.362859 | orchestrator |  "vg_name": "ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0" 2025-09-06 00:40:32.362868 | orchestrator |  }, 2025-09-06 00:40:32.362878 | orchestrator |  { 2025-09-06 00:40:32.362887 | orchestrator |  "lv_name": "osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6", 2025-09-06 00:40:32.362897 | orchestrator |  "vg_name": "ceph-e9969153-fa79-5368-8c16-a33775dfe5f6" 2025-09-06 00:40:32.362906 | orchestrator |  } 2025-09-06 00:40:32.362916 | orchestrator |  ], 2025-09-06 00:40:32.362980 | orchestrator |  "pv": [ 2025-09-06 00:40:32.362989 | orchestrator |  { 2025-09-06 00:40:32.362999 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-06 00:40:32.363008 | orchestrator |  "vg_name": "ceph-e9969153-fa79-5368-8c16-a33775dfe5f6" 2025-09-06 00:40:32.363017 | orchestrator |  }, 2025-09-06 00:40:32.363027 | orchestrator |  { 2025-09-06 00:40:32.363036 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-06 00:40:32.363046 | orchestrator |  "vg_name": "ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0" 2025-09-06 00:40:32.363055 | orchestrator |  } 2025-09-06 00:40:32.363065 | orchestrator |  ] 2025-09-06 00:40:32.363074 | orchestrator |  } 2025-09-06 00:40:32.363084 | orchestrator | } 2025-09-06 00:40:32.363094 | orchestrator | 2025-09-06 00:40:32.363104 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-06 00:40:32.363116 | orchestrator | 2025-09-06 00:40:32.363127 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-06 00:40:32.363139 | orchestrator | Saturday 06 September 2025 00:40:27 +0000 (0:00:00.408) 0:00:46.246 **** 2025-09-06 00:40:32.363151 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-06 00:40:32.363162 | orchestrator | 2025-09-06 00:40:32.363188 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-06 00:40:32.363199 | orchestrator | Saturday 06 September 2025 00:40:27 +0000 (0:00:00.244) 0:00:46.491 **** 2025-09-06 00:40:32.363211 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:40:32.363223 | orchestrator | 2025-09-06 00:40:32.363234 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363245 | orchestrator | Saturday 06 September 2025 00:40:27 +0000 (0:00:00.212) 0:00:46.703 **** 2025-09-06 00:40:32.363257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-06 00:40:32.363267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-06 00:40:32.363278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-06 00:40:32.363288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-06 00:40:32.363299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-06 00:40:32.363311 | orchestrator | included: /ansible/tasks/_add-device-links.yml 
for testbed-node-5 => (item=loop5) 2025-09-06 00:40:32.363322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-06 00:40:32.363333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-06 00:40:32.363344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-06 00:40:32.363355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-06 00:40:32.363366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-06 00:40:32.363388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-06 00:40:32.363399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-06 00:40:32.363411 | orchestrator | 2025-09-06 00:40:32.363422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363433 | orchestrator | Saturday 06 September 2025 00:40:28 +0000 (0:00:00.386) 0:00:47.089 **** 2025-09-06 00:40:32.363445 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:32.363459 | orchestrator | 2025-09-06 00:40:32.363468 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363478 | orchestrator | Saturday 06 September 2025 00:40:28 +0000 (0:00:00.154) 0:00:47.244 **** 2025-09-06 00:40:32.363487 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:32.363497 | orchestrator | 2025-09-06 00:40:32.363506 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363532 | orchestrator | Saturday 06 September 2025 00:40:28 +0000 (0:00:00.204) 0:00:47.449 **** 2025-09-06 00:40:32.363543 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:32.363552 | orchestrator | 2025-09-06 00:40:32.363562 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363571 | orchestrator | Saturday 06 September 2025 00:40:28 +0000 (0:00:00.181) 0:00:47.630 **** 2025-09-06 00:40:32.363581 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:32.363590 | orchestrator | 2025-09-06 00:40:32.363600 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363609 | orchestrator | Saturday 06 September 2025 00:40:28 +0000 (0:00:00.182) 0:00:47.813 **** 2025-09-06 00:40:32.363619 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:32.363628 | orchestrator | 2025-09-06 00:40:32.363638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363648 | orchestrator | Saturday 06 September 2025 00:40:29 +0000 (0:00:00.183) 0:00:47.996 **** 2025-09-06 00:40:32.363657 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:32.363667 | orchestrator | 2025-09-06 00:40:32.363676 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363686 | orchestrator | Saturday 06 September 2025 00:40:29 +0000 (0:00:00.421) 0:00:48.418 **** 2025-09-06 00:40:32.363695 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:32.363705 | orchestrator | 2025-09-06 00:40:32.363714 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2025-09-06 00:40:32.363724 | orchestrator | Saturday 06 September 2025 00:40:29 +0000 (0:00:00.192) 0:00:48.610 **** 2025-09-06 00:40:32.363733 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:32.363743 | orchestrator | 2025-09-06 00:40:32.363753 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363762 | orchestrator | Saturday 06 September 2025 00:40:29 +0000 (0:00:00.187) 0:00:48.798 **** 2025-09-06 00:40:32.363772 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626) 2025-09-06 00:40:32.363783 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626) 2025-09-06 00:40:32.363792 | orchestrator | 2025-09-06 00:40:32.363802 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363811 | orchestrator | Saturday 06 September 2025 00:40:30 +0000 (0:00:00.440) 0:00:49.238 **** 2025-09-06 00:40:32.363821 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b) 2025-09-06 00:40:32.363830 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b) 2025-09-06 00:40:32.363840 | orchestrator | 2025-09-06 00:40:32.363849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363859 | orchestrator | Saturday 06 September 2025 00:40:30 +0000 (0:00:00.385) 0:00:49.623 **** 2025-09-06 00:40:32.363880 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4) 2025-09-06 00:40:32.363890 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4) 2025-09-06 00:40:32.363899 | orchestrator | 2025-09-06 00:40:32.363909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363936 | orchestrator | Saturday 06 September 2025 00:40:31 +0000 (0:00:00.431) 0:00:50.055 **** 2025-09-06 00:40:32.363946 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634) 2025-09-06 00:40:32.363955 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634) 2025-09-06 00:40:32.363965 | orchestrator | 2025-09-06 00:40:32.363974 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-06 00:40:32.363984 | orchestrator | Saturday 06 September 2025 00:40:31 +0000 (0:00:00.446) 0:00:50.501 **** 2025-09-06 00:40:32.363993 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-06 00:40:32.364003 | orchestrator | 2025-09-06 00:40:32.364012 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:32.364022 | orchestrator | Saturday 06 September 2025 00:40:31 +0000 (0:00:00.332) 0:00:50.834 **** 2025-09-06 00:40:32.364031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-06 00:40:32.364041 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-06 00:40:32.364050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-06 00:40:32.364060 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-06 00:40:32.364069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-06 00:40:32.364079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-06 00:40:32.364088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-06 00:40:32.364097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-06 00:40:32.364107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-06 00:40:32.364117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-06 00:40:32.364126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-06 00:40:32.364142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-06 00:40:41.451825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-06 00:40:41.451991 | orchestrator | 2025-09-06 00:40:41.452011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452024 | orchestrator | Saturday 06 September 2025 00:40:32 +0000 (0:00:00.406) 0:00:51.241 **** 2025-09-06 00:40:41.452035 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452047 | orchestrator | 2025-09-06 00:40:41.452058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452068 | orchestrator | Saturday 06 September 2025 00:40:32 +0000 (0:00:00.210) 0:00:51.451 **** 2025-09-06 00:40:41.452079 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452090 | orchestrator | 2025-09-06 00:40:41.452101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452111 | orchestrator | Saturday 06 September 2025 00:40:32 +0000 (0:00:00.205) 0:00:51.656 **** 2025-09-06 00:40:41.452122 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452133 | orchestrator | 2025-09-06 00:40:41.452143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452180 | orchestrator | Saturday 06 September 2025 00:40:33 +0000 (0:00:00.657) 0:00:52.313 **** 2025-09-06 00:40:41.452191 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452202 | orchestrator | 2025-09-06 00:40:41.452213 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452224 | orchestrator | Saturday 06 September 2025 00:40:33 +0000 (0:00:00.205) 0:00:52.519 **** 2025-09-06 00:40:41.452234 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452245 | orchestrator | 2025-09-06 00:40:41.452256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452266 | orchestrator | Saturday 06 September 2025 00:40:33 +0000 (0:00:00.218) 0:00:52.737 **** 2025-09-06 00:40:41.452277 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452288 | orchestrator | 2025-09-06 00:40:41.452298 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-06 00:40:41.452309 | orchestrator | Saturday 06 September 2025 00:40:34 +0000 (0:00:00.214) 0:00:52.951 **** 2025-09-06 00:40:41.452320 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452330 | orchestrator | 2025-09-06 00:40:41.452341 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452354 | orchestrator | Saturday 06 September 2025 00:40:34 +0000 (0:00:00.291) 0:00:53.242 **** 2025-09-06 00:40:41.452366 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452378 | orchestrator | 2025-09-06 00:40:41.452391 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452404 | orchestrator | Saturday 06 September 2025 00:40:34 +0000 (0:00:00.213) 0:00:53.456 **** 2025-09-06 00:40:41.452416 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-06 00:40:41.452429 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-06 00:40:41.452442 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-06 00:40:41.452455 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-06 00:40:41.452467 | orchestrator | 2025-09-06 00:40:41.452479 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452492 | orchestrator | Saturday 06 September 2025 00:40:35 +0000 (0:00:00.665) 0:00:54.121 **** 2025-09-06 00:40:41.452504 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452516 | orchestrator | 2025-09-06 00:40:41.452528 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452541 | orchestrator | Saturday 06 September 2025 00:40:35 +0000 (0:00:00.224) 0:00:54.346 **** 2025-09-06 00:40:41.452553 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452566 | orchestrator | 2025-09-06 00:40:41.452579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452592 | orchestrator | Saturday 06 September 2025 00:40:35 +0000 (0:00:00.212) 0:00:54.558 **** 2025-09-06 00:40:41.452604 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452617 | orchestrator | 2025-09-06 00:40:41.452629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-06 00:40:41.452641 | orchestrator | Saturday 06 September 2025 00:40:35 +0000 (0:00:00.184) 0:00:54.743 **** 2025-09-06 00:40:41.452653 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452665 | orchestrator | 2025-09-06 00:40:41.452678 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-06 00:40:41.452690 | orchestrator | Saturday 06 September 2025 00:40:36 +0000 (0:00:00.199) 0:00:54.942 **** 2025-09-06 00:40:41.452703 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452715 | orchestrator | 2025-09-06 00:40:41.452727 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-06 00:40:41.452738 | orchestrator | Saturday 06 September 2025 00:40:36 +0000 (0:00:00.306) 0:00:55.248 **** 2025-09-06 00:40:41.452749 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'}}) 2025-09-06 00:40:41.452760 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'd801673f-a74f-56ad-ad0d-e97588ff4709'}}) 2025-09-06 00:40:41.452778 | orchestrator | 2025-09-06 00:40:41.452789 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-06 00:40:41.452799 | orchestrator | Saturday 06 September 2025 00:40:36 +0000 (0:00:00.212) 0:00:55.461 **** 2025-09-06 00:40:41.452811 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'}) 2025-09-06 00:40:41.452823 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'}) 2025-09-06 00:40:41.452834 | orchestrator | 2025-09-06 00:40:41.452845 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-06 00:40:41.452874 | orchestrator | Saturday 06 September 2025 00:40:38 +0000 (0:00:01.871) 0:00:57.333 **** 2025-09-06 00:40:41.452886 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:41.452898 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:41.452932 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.452943 | orchestrator | 2025-09-06 00:40:41.452954 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-06 00:40:41.452965 | orchestrator | Saturday 06 September 2025 00:40:38 +0000 (0:00:00.151) 0:00:57.485 **** 2025-09-06 00:40:41.452976 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'}) 2025-09-06 00:40:41.453008 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'}) 2025-09-06 00:40:41.453021 | orchestrator | 2025-09-06 00:40:41.453032 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-06 00:40:41.453042 | orchestrator | Saturday 06 September 2025 00:40:39 +0000 (0:00:01.319) 0:00:58.805 **** 2025-09-06 00:40:41.453053 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:41.453064 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:41.453075 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.453086 | orchestrator | 2025-09-06 00:40:41.453096 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-06 00:40:41.453107 | orchestrator | Saturday 06 September 2025 00:40:40 +0000 (0:00:00.141) 0:00:58.946 **** 2025-09-06 00:40:41.453118 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.453128 | orchestrator | 2025-09-06 00:40:41.453139 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-06 00:40:41.453149 | orchestrator | Saturday 06 September 2025 00:40:40 +0000 (0:00:00.154) 0:00:59.100 **** 2025-09-06 
00:40:41.453160 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:41.453176 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:41.453187 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.453198 | orchestrator | 2025-09-06 00:40:41.453208 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-06 00:40:41.453219 | orchestrator | Saturday 06 September 2025 00:40:40 +0000 (0:00:00.155) 0:00:59.256 **** 2025-09-06 00:40:41.453230 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.453249 | orchestrator | 2025-09-06 00:40:41.453260 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-06 00:40:41.453270 | orchestrator | Saturday 06 September 2025 00:40:40 +0000 (0:00:00.132) 0:00:59.389 **** 2025-09-06 00:40:41.453281 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:41.453292 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:41.453303 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.453313 | orchestrator | 2025-09-06 00:40:41.453324 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-06 00:40:41.453335 | orchestrator | Saturday 06 September 2025 00:40:40 +0000 (0:00:00.138) 0:00:59.527 **** 2025-09-06 00:40:41.453345 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.453356 | orchestrator | 2025-09-06 00:40:41.453366 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-06 00:40:41.453377 | orchestrator | Saturday 06 September 2025 00:40:40 +0000 (0:00:00.131) 0:00:59.659 **** 2025-09-06 00:40:41.453388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:41.453399 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:41.453410 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:41.453420 | orchestrator | 2025-09-06 00:40:41.453431 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-06 00:40:41.453441 | orchestrator | Saturday 06 September 2025 00:40:40 +0000 (0:00:00.149) 0:00:59.808 **** 2025-09-06 00:40:41.453452 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:40:41.453463 | orchestrator | 2025-09-06 00:40:41.453473 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-06 00:40:41.453484 | orchestrator | Saturday 06 September 2025 00:40:41 +0000 (0:00:00.369) 0:01:00.177 **** 2025-09-06 00:40:41.453504 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:47.927759 | orchestrator | 
skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:47.927863 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.927875 | orchestrator | 2025-09-06 00:40:47.927885 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-06 00:40:47.927896 | orchestrator | Saturday 06 September 2025 00:40:41 +0000 (0:00:00.163) 0:01:00.341 **** 2025-09-06 00:40:47.927947 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:47.927958 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:47.927967 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.927976 | orchestrator | 2025-09-06 00:40:47.927985 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-06 00:40:47.927994 | orchestrator | Saturday 06 September 2025 00:40:41 +0000 (0:00:00.142) 0:01:00.483 **** 2025-09-06 00:40:47.928003 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:47.928012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:47.928021 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928055 | orchestrator | 2025-09-06 00:40:47.928064 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-06 00:40:47.928073 | orchestrator | Saturday 06 September 2025 00:40:41 +0000 (0:00:00.148) 0:01:00.631 **** 2025-09-06 00:40:47.928081 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928090 | orchestrator | 2025-09-06 00:40:47.928098 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-06 00:40:47.928107 | orchestrator | Saturday 06 September 2025 00:40:41 +0000 (0:00:00.133) 0:01:00.765 **** 2025-09-06 00:40:47.928116 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928124 | orchestrator | 2025-09-06 00:40:47.928132 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-06 00:40:47.928141 | orchestrator | Saturday 06 September 2025 00:40:42 +0000 (0:00:00.144) 0:01:00.909 **** 2025-09-06 00:40:47.928149 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928158 | orchestrator | 2025-09-06 00:40:47.928166 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-06 00:40:47.928189 | orchestrator | Saturday 06 September 2025 00:40:42 +0000 (0:00:00.134) 0:01:01.044 **** 2025-09-06 00:40:47.928197 | orchestrator | ok: [testbed-node-5] => { 2025-09-06 00:40:47.928206 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-06 00:40:47.928215 | orchestrator | } 2025-09-06 00:40:47.928224 | orchestrator | 2025-09-06 00:40:47.928232 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-06 00:40:47.928241 | orchestrator | Saturday 06 September 2025 00:40:42 +0000 (0:00:00.145) 
0:01:01.189 **** 2025-09-06 00:40:47.928249 | orchestrator | ok: [testbed-node-5] => { 2025-09-06 00:40:47.928258 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-06 00:40:47.928266 | orchestrator | } 2025-09-06 00:40:47.928275 | orchestrator | 2025-09-06 00:40:47.928284 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-06 00:40:47.928295 | orchestrator | Saturday 06 September 2025 00:40:42 +0000 (0:00:00.137) 0:01:01.326 **** 2025-09-06 00:40:47.928305 | orchestrator | ok: [testbed-node-5] => { 2025-09-06 00:40:47.928315 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-06 00:40:47.928325 | orchestrator | } 2025-09-06 00:40:47.928336 | orchestrator | 2025-09-06 00:40:47.928346 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-06 00:40:47.928356 | orchestrator | Saturday 06 September 2025 00:40:42 +0000 (0:00:00.143) 0:01:01.470 **** 2025-09-06 00:40:47.928367 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:40:47.928376 | orchestrator | 2025-09-06 00:40:47.928387 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-06 00:40:47.928397 | orchestrator | Saturday 06 September 2025 00:40:43 +0000 (0:00:00.523) 0:01:01.994 **** 2025-09-06 00:40:47.928407 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:40:47.928416 | orchestrator | 2025-09-06 00:40:47.928426 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-06 00:40:47.928436 | orchestrator | Saturday 06 September 2025 00:40:43 +0000 (0:00:00.507) 0:01:02.501 **** 2025-09-06 00:40:47.928446 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:40:47.928456 | orchestrator | 2025-09-06 00:40:47.928466 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-06 00:40:47.928476 | orchestrator | Saturday 06 September 2025 00:40:44 +0000 (0:00:00.736) 0:01:03.238 **** 2025-09-06 00:40:47.928485 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:40:47.928495 | orchestrator | 2025-09-06 00:40:47.928505 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-06 00:40:47.928515 | orchestrator | Saturday 06 September 2025 00:40:44 +0000 (0:00:00.155) 0:01:03.394 **** 2025-09-06 00:40:47.928525 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928535 | orchestrator | 2025-09-06 00:40:47.928545 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-06 00:40:47.928555 | orchestrator | Saturday 06 September 2025 00:40:44 +0000 (0:00:00.122) 0:01:03.516 **** 2025-09-06 00:40:47.928571 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928581 | orchestrator | 2025-09-06 00:40:47.928591 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-06 00:40:47.928602 | orchestrator | Saturday 06 September 2025 00:40:44 +0000 (0:00:00.116) 0:01:03.632 **** 2025-09-06 00:40:47.928612 | orchestrator | ok: [testbed-node-5] => { 2025-09-06 00:40:47.928622 | orchestrator |  "vgs_report": { 2025-09-06 00:40:47.928633 | orchestrator |  "vg": [] 2025-09-06 00:40:47.928659 | orchestrator |  } 2025-09-06 00:40:47.928669 | orchestrator | } 2025-09-06 00:40:47.928678 | orchestrator | 2025-09-06 00:40:47.928686 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 
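Note on the preceding steps: the three "Gather ... VGs with total and available size in bytes" tasks and the "Combine JSON from _db/wal/db_wal_vgs_cmd_output" task read LVM's JSON report format and merge the per-class results into the (here empty) vgs_report printed above. A minimal sketch of that collection step, assuming the standard vgs options rather than the playbook's exact command line:

    # Sketch only: query VG total/free size in bytes and merge the reports,
    # roughly what the "Gather ... VGs" + "Combine JSON" tasks do.
    import json
    import subprocess

    def gather_vgs(vg_names):
        """Return [{'vg_name':..., 'vg_size':..., 'vg_free':...}] for the given VGs."""
        if not vg_names:
            return []
        out = subprocess.run(
            ["vgs", "--reportformat", "json", "--units", "b", "--nosuffix",
             "-o", "vg_name,vg_size,vg_free", *vg_names],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["report"][0]["vg"]

    # db_vgs / wal_vgs / db_wal_vgs would normally come from the ceph_*_devices
    # variables; they are empty here, matching the skipped tasks in this run.
    vgs_report = {"vg": gather_vgs([]) + gather_vgs([]) + gather_vgs([])}
    print(json.dumps(vgs_report, indent=2))  # -> {"vg": []}, as printed above

With no separate DB/WAL devices configured, all three lists stay empty, which is consistent with the empty vgs_report and the skipped size checks that follow.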
2025-09-06 00:40:47.928695 | orchestrator | Saturday 06 September 2025 00:40:44 +0000 (0:00:00.136) 0:01:03.769 **** 2025-09-06 00:40:47.928704 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928712 | orchestrator | 2025-09-06 00:40:47.928721 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-06 00:40:47.928729 | orchestrator | Saturday 06 September 2025 00:40:45 +0000 (0:00:00.153) 0:01:03.922 **** 2025-09-06 00:40:47.928738 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928746 | orchestrator | 2025-09-06 00:40:47.928755 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-06 00:40:47.928763 | orchestrator | Saturday 06 September 2025 00:40:45 +0000 (0:00:00.176) 0:01:04.099 **** 2025-09-06 00:40:47.928772 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928780 | orchestrator | 2025-09-06 00:40:47.928789 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-06 00:40:47.928797 | orchestrator | Saturday 06 September 2025 00:40:45 +0000 (0:00:00.140) 0:01:04.239 **** 2025-09-06 00:40:47.928806 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928814 | orchestrator | 2025-09-06 00:40:47.928823 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-06 00:40:47.928831 | orchestrator | Saturday 06 September 2025 00:40:45 +0000 (0:00:00.214) 0:01:04.454 **** 2025-09-06 00:40:47.928840 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928848 | orchestrator | 2025-09-06 00:40:47.928857 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-06 00:40:47.928865 | orchestrator | Saturday 06 September 2025 00:40:45 +0000 (0:00:00.156) 0:01:04.611 **** 2025-09-06 00:40:47.928874 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928882 | orchestrator | 2025-09-06 00:40:47.928891 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-06 00:40:47.928899 | orchestrator | Saturday 06 September 2025 00:40:45 +0000 (0:00:00.153) 0:01:04.764 **** 2025-09-06 00:40:47.928926 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928934 | orchestrator | 2025-09-06 00:40:47.928943 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-06 00:40:47.928951 | orchestrator | Saturday 06 September 2025 00:40:46 +0000 (0:00:00.215) 0:01:04.979 **** 2025-09-06 00:40:47.928960 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.928968 | orchestrator | 2025-09-06 00:40:47.928977 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-06 00:40:47.928985 | orchestrator | Saturday 06 September 2025 00:40:46 +0000 (0:00:00.165) 0:01:05.145 **** 2025-09-06 00:40:47.928994 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.929002 | orchestrator | 2025-09-06 00:40:47.929011 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-06 00:40:47.929023 | orchestrator | Saturday 06 September 2025 00:40:46 +0000 (0:00:00.416) 0:01:05.562 **** 2025-09-06 00:40:47.929032 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.929040 | orchestrator | 2025-09-06 00:40:47.929049 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] 
********************* 2025-09-06 00:40:47.929058 | orchestrator | Saturday 06 September 2025 00:40:46 +0000 (0:00:00.178) 0:01:05.740 **** 2025-09-06 00:40:47.929066 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.929080 | orchestrator | 2025-09-06 00:40:47.929089 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-06 00:40:47.929097 | orchestrator | Saturday 06 September 2025 00:40:46 +0000 (0:00:00.138) 0:01:05.878 **** 2025-09-06 00:40:47.929106 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.929114 | orchestrator | 2025-09-06 00:40:47.929123 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-06 00:40:47.929131 | orchestrator | Saturday 06 September 2025 00:40:47 +0000 (0:00:00.135) 0:01:06.014 **** 2025-09-06 00:40:47.929140 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.929148 | orchestrator | 2025-09-06 00:40:47.929157 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-06 00:40:47.929165 | orchestrator | Saturday 06 September 2025 00:40:47 +0000 (0:00:00.141) 0:01:06.155 **** 2025-09-06 00:40:47.929174 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.929182 | orchestrator | 2025-09-06 00:40:47.929191 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-06 00:40:47.929199 | orchestrator | Saturday 06 September 2025 00:40:47 +0000 (0:00:00.160) 0:01:06.315 **** 2025-09-06 00:40:47.929208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:47.929217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:47.929225 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.929234 | orchestrator | 2025-09-06 00:40:47.929242 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-06 00:40:47.929251 | orchestrator | Saturday 06 September 2025 00:40:47 +0000 (0:00:00.169) 0:01:06.485 **** 2025-09-06 00:40:47.929259 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:47.929268 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:47.929277 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:47.929285 | orchestrator | 2025-09-06 00:40:47.929294 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-06 00:40:47.929302 | orchestrator | Saturday 06 September 2025 00:40:47 +0000 (0:00:00.162) 0:01:06.647 **** 2025-09-06 00:40:47.929316 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:51.159442 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:51.159557 | orchestrator | skipping: [testbed-node-5] 2025-09-06 
00:40:51.159572 | orchestrator | 2025-09-06 00:40:51.159584 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-06 00:40:51.159596 | orchestrator | Saturday 06 September 2025 00:40:47 +0000 (0:00:00.170) 0:01:06.817 **** 2025-09-06 00:40:51.159608 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:51.159619 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:51.159630 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:51.159641 | orchestrator | 2025-09-06 00:40:51.159652 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-06 00:40:51.159663 | orchestrator | Saturday 06 September 2025 00:40:48 +0000 (0:00:00.143) 0:01:06.961 **** 2025-09-06 00:40:51.159673 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:51.159711 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:51.159723 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:51.159734 | orchestrator | 2025-09-06 00:40:51.159745 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-06 00:40:51.159756 | orchestrator | Saturday 06 September 2025 00:40:48 +0000 (0:00:00.171) 0:01:07.133 **** 2025-09-06 00:40:51.159766 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:51.159778 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:51.159788 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:51.159799 | orchestrator | 2025-09-06 00:40:51.159810 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-06 00:40:51.159821 | orchestrator | Saturday 06 September 2025 00:40:48 +0000 (0:00:00.167) 0:01:07.300 **** 2025-09-06 00:40:51.159832 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:51.159843 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:51.159854 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:51.159864 | orchestrator | 2025-09-06 00:40:51.159875 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-06 00:40:51.159886 | orchestrator | Saturday 06 September 2025 00:40:48 +0000 (0:00:00.412) 0:01:07.713 **** 2025-09-06 00:40:51.159897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:51.159938 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:51.159949 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:51.159960 | orchestrator | 2025-09-06 00:40:51.159970 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-06 00:40:51.159981 | orchestrator | Saturday 06 September 2025 00:40:48 +0000 (0:00:00.166) 0:01:07.879 **** 2025-09-06 00:40:51.159992 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:40:51.160004 | orchestrator | 2025-09-06 00:40:51.160015 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-06 00:40:51.160026 | orchestrator | Saturday 06 September 2025 00:40:49 +0000 (0:00:00.549) 0:01:08.429 **** 2025-09-06 00:40:51.160036 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:40:51.160047 | orchestrator | 2025-09-06 00:40:51.160057 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-06 00:40:51.160068 | orchestrator | Saturday 06 September 2025 00:40:50 +0000 (0:00:00.553) 0:01:08.982 **** 2025-09-06 00:40:51.160079 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:40:51.160090 | orchestrator | 2025-09-06 00:40:51.160100 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-06 00:40:51.160111 | orchestrator | Saturday 06 September 2025 00:40:50 +0000 (0:00:00.143) 0:01:09.126 **** 2025-09-06 00:40:51.160122 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'vg_name': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'}) 2025-09-06 00:40:51.160134 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'vg_name': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'}) 2025-09-06 00:40:51.160145 | orchestrator | 2025-09-06 00:40:51.160155 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-06 00:40:51.160177 | orchestrator | Saturday 06 September 2025 00:40:50 +0000 (0:00:00.168) 0:01:09.294 **** 2025-09-06 00:40:51.160205 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:51.160217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:51.160228 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:51.160239 | orchestrator | 2025-09-06 00:40:51.160250 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-06 00:40:51.160261 | orchestrator | Saturday 06 September 2025 00:40:50 +0000 (0:00:00.251) 0:01:09.546 **** 2025-09-06 00:40:51.160272 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:51.160283 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:51.160294 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:51.160305 | orchestrator | 2025-09-06 00:40:51.160316 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes 
is missing] ************************ 2025-09-06 00:40:51.160327 | orchestrator | Saturday 06 September 2025 00:40:50 +0000 (0:00:00.155) 0:01:09.702 **** 2025-09-06 00:40:51.160338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'})  2025-09-06 00:40:51.160367 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'})  2025-09-06 00:40:51.160379 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:40:51.160389 | orchestrator | 2025-09-06 00:40:51.160400 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-06 00:40:51.160411 | orchestrator | Saturday 06 September 2025 00:40:50 +0000 (0:00:00.169) 0:01:09.872 **** 2025-09-06 00:40:51.160422 | orchestrator | ok: [testbed-node-5] => { 2025-09-06 00:40:51.160433 | orchestrator |  "lvm_report": { 2025-09-06 00:40:51.160445 | orchestrator |  "lv": [ 2025-09-06 00:40:51.160456 | orchestrator |  { 2025-09-06 00:40:51.160467 | orchestrator |  "lv_name": "osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f", 2025-09-06 00:40:51.160484 | orchestrator |  "vg_name": "ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f" 2025-09-06 00:40:51.160495 | orchestrator |  }, 2025-09-06 00:40:51.160506 | orchestrator |  { 2025-09-06 00:40:51.160517 | orchestrator |  "lv_name": "osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709", 2025-09-06 00:40:51.160528 | orchestrator |  "vg_name": "ceph-d801673f-a74f-56ad-ad0d-e97588ff4709" 2025-09-06 00:40:51.160539 | orchestrator |  } 2025-09-06 00:40:51.160550 | orchestrator |  ], 2025-09-06 00:40:51.160560 | orchestrator |  "pv": [ 2025-09-06 00:40:51.160571 | orchestrator |  { 2025-09-06 00:40:51.160582 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-06 00:40:51.160593 | orchestrator |  "vg_name": "ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f" 2025-09-06 00:40:51.160603 | orchestrator |  }, 2025-09-06 00:40:51.160614 | orchestrator |  { 2025-09-06 00:40:51.160625 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-06 00:40:51.160636 | orchestrator |  "vg_name": "ceph-d801673f-a74f-56ad-ad0d-e97588ff4709" 2025-09-06 00:40:51.160647 | orchestrator |  } 2025-09-06 00:40:51.160658 | orchestrator |  ] 2025-09-06 00:40:51.160669 | orchestrator |  } 2025-09-06 00:40:51.160680 | orchestrator | } 2025-09-06 00:40:51.160691 | orchestrator | 2025-09-06 00:40:51.160702 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:40:51.160721 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-06 00:40:51.160732 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-06 00:40:51.160743 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-06 00:40:51.160754 | orchestrator | 2025-09-06 00:40:51.160765 | orchestrator | 2025-09-06 00:40:51.160775 | orchestrator | 2025-09-06 00:40:51.160786 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:40:51.160797 | orchestrator | Saturday 06 September 2025 00:40:51 +0000 (0:00:00.155) 0:01:10.027 **** 2025-09-06 00:40:51.160808 | orchestrator | =============================================================================== 2025-09-06 00:40:51.160819 | 
orchestrator | Create block VGs -------------------------------------------------------- 5.67s 2025-09-06 00:40:51.160830 | orchestrator | Create block LVs -------------------------------------------------------- 3.98s 2025-09-06 00:40:51.160840 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.90s 2025-09-06 00:40:51.160851 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.75s 2025-09-06 00:40:51.160862 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s 2025-09-06 00:40:51.160873 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2025-09-06 00:40:51.160883 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s 2025-09-06 00:40:51.160894 | orchestrator | Add known partitions to the list of available block devices ------------- 1.33s 2025-09-06 00:40:51.160931 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s 2025-09-06 00:40:51.518072 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s 2025-09-06 00:40:51.518174 | orchestrator | Print LVM report data --------------------------------------------------- 0.84s 2025-09-06 00:40:51.518187 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2025-09-06 00:40:51.518199 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2025-09-06 00:40:51.518209 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.75s 2025-09-06 00:40:51.518220 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s 2025-09-06 00:40:51.518231 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.72s 2025-09-06 00:40:51.518241 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.71s 2025-09-06 00:40:51.518252 | orchestrator | Print size needed for WAL LVs on ceph_db_wal_devices -------------------- 0.69s 2025-09-06 00:40:51.518262 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.68s 2025-09-06 00:40:51.518273 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2025-09-06 00:41:03.769440 | orchestrator | 2025-09-06 00:41:03 | INFO  | Task 94d87ef3-2eab-447d-b83a-a9de5ea3b53f (facts) was prepared for execution. 2025-09-06 00:41:03.769577 | orchestrator | 2025-09-06 00:41:03 | INFO  | It takes a moment until task 94d87ef3-2eab-447d-b83a-a9de5ea3b53f (facts) has been started and output is visible here. 
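Note on the checks that ran just before this recap: "Get list of Ceph LVs/PVs with associated VGs" collects lvs/pvs JSON, "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" merges it into the lvm_report printed above, and the "Fail if ... defined in lvm_volumes is missing" tasks guard against a mismatch between lvm_volumes and what LVM actually reports. A rough, self-contained sketch of that verification, assuming standard lvs/pvs JSON output (the two lvm_volumes entries are copied from the log, not from the real inventory):

    # Sketch only: rebuild the printed lvm_report and check that every block LV
    # named in lvm_volumes exists as a VG/LV pair on the node.
    import json
    import subprocess

    def lvm_query(cmd, fields, key):
        out = subprocess.run(
            [cmd, "--reportformat", "json", "-o", fields],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["report"][0][key]

    lvm_volumes = [  # values as shown in the log for testbed-node-5
        {"data": "osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f",
         "data_vg": "ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f"},
        {"data": "osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709",
         "data_vg": "ceph-d801673f-a74f-56ad-ad0d-e97588ff4709"},
    ]

    lvm_report = {
        "lv": lvm_query("lvs", "lv_name,vg_name", "lv"),
        "pv": lvm_query("pvs", "pv_name,vg_name", "pv"),
    }
    existing = {(lv["vg_name"], lv["lv_name"]) for lv in lvm_report["lv"]}
    missing = [v for v in lvm_volumes if (v["data_vg"], v["data"]) not in existing]
    if missing:
        raise SystemExit(f"block LV defined in lvm_volumes is missing: {missing}")
    print(json.dumps(lvm_report, indent=2))

On this node the two block LVs are backed by /dev/sdb and /dev/sdc, as the pv list above shows, so the consistency check passes.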
2025-09-06 00:41:15.862777 | orchestrator | 2025-09-06 00:41:15.862915 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-06 00:41:15.862929 | orchestrator | 2025-09-06 00:41:15.862936 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-06 00:41:15.862944 | orchestrator | Saturday 06 September 2025 00:41:07 +0000 (0:00:00.265) 0:00:00.265 **** 2025-09-06 00:41:15.862951 | orchestrator | ok: [testbed-manager] 2025-09-06 00:41:15.862958 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:41:15.862990 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:41:15.862997 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:41:15.863004 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:41:15.863010 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:41:15.863017 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:41:15.863023 | orchestrator | 2025-09-06 00:41:15.863030 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-06 00:41:15.863037 | orchestrator | Saturday 06 September 2025 00:41:09 +0000 (0:00:01.101) 0:00:01.367 **** 2025-09-06 00:41:15.863056 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:41:15.863064 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:41:15.863071 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:41:15.863078 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:41:15.863084 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:41:15.863090 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:41:15.863097 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:41:15.863103 | orchestrator | 2025-09-06 00:41:15.863110 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-06 00:41:15.863116 | orchestrator | 2025-09-06 00:41:15.863123 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-06 00:41:15.863130 | orchestrator | Saturday 06 September 2025 00:41:10 +0000 (0:00:01.236) 0:00:02.603 **** 2025-09-06 00:41:15.863136 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:41:15.863143 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:41:15.863149 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:41:15.863156 | orchestrator | ok: [testbed-manager] 2025-09-06 00:41:15.863162 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:41:15.863168 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:41:15.863175 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:41:15.863181 | orchestrator | 2025-09-06 00:41:15.863188 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-06 00:41:15.863194 | orchestrator | 2025-09-06 00:41:15.863201 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-06 00:41:15.863207 | orchestrator | Saturday 06 September 2025 00:41:15 +0000 (0:00:04.778) 0:00:07.382 **** 2025-09-06 00:41:15.863214 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:41:15.863221 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:41:15.863227 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:41:15.863233 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:41:15.863240 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:41:15.863246 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:41:15.863253 | orchestrator | skipping: 
[testbed-node-5] 2025-09-06 00:41:15.863259 | orchestrator | 2025-09-06 00:41:15.863266 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:41:15.863273 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:41:15.863281 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:41:15.863287 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:41:15.863294 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:41:15.863300 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:41:15.863307 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:41:15.863314 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:41:15.863329 | orchestrator | 2025-09-06 00:41:15.863337 | orchestrator | 2025-09-06 00:41:15.863345 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:41:15.863353 | orchestrator | Saturday 06 September 2025 00:41:15 +0000 (0:00:00.489) 0:00:07.871 **** 2025-09-06 00:41:15.863361 | orchestrator | =============================================================================== 2025-09-06 00:41:15.863368 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.78s 2025-09-06 00:41:15.863376 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2025-09-06 00:41:15.863384 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s 2025-09-06 00:41:15.863391 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2025-09-06 00:41:28.196735 | orchestrator | 2025-09-06 00:41:28 | INFO  | Task 788a9237-9c48-4b5f-9d19-065b023fe167 (frr) was prepared for execution. 2025-09-06 00:41:28.196851 | orchestrator | 2025-09-06 00:41:28 | INFO  | It takes a moment until task 788a9237-9c48-4b5f-9d19-065b023fe167 (frr) has been started and output is visible here. 
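Note on the "Task ... was prepared for execution" / "It takes a moment until task ... has been started" messages: the OSISM manager queues each task, and the CLI then polls its state, as the "is in state STARTED ... Wait 1 second(s) until the next check" loop further below for the nutshell collection shows. A minimal sketch of such a wait loop; get_task_state() is a hypothetical stand-in for the real state lookup:

    # Sketch only: poll a set of task IDs until each one reaches a final state,
    # printing progress in the same style as the log lines below.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

Called with the task IDs from the log and a real state lookup, this prints one line per task per pass until every task has finished, which is the pattern visible in the polling output that follows.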
2025-09-06 00:41:53.923408 | orchestrator | 2025-09-06 00:41:53.923526 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-06 00:41:53.923543 | orchestrator | 2025-09-06 00:41:53.923555 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-06 00:41:53.923567 | orchestrator | Saturday 06 September 2025 00:41:32 +0000 (0:00:00.181) 0:00:00.181 **** 2025-09-06 00:41:53.923579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-06 00:41:53.923592 | orchestrator | 2025-09-06 00:41:53.923603 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-06 00:41:53.923614 | orchestrator | Saturday 06 September 2025 00:41:32 +0000 (0:00:00.168) 0:00:00.349 **** 2025-09-06 00:41:53.923625 | orchestrator | changed: [testbed-manager] 2025-09-06 00:41:53.923636 | orchestrator | 2025-09-06 00:41:53.923647 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-06 00:41:53.923658 | orchestrator | Saturday 06 September 2025 00:41:33 +0000 (0:00:01.014) 0:00:01.364 **** 2025-09-06 00:41:53.923669 | orchestrator | changed: [testbed-manager] 2025-09-06 00:41:53.923679 | orchestrator | 2025-09-06 00:41:53.923708 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-06 00:41:53.923719 | orchestrator | Saturday 06 September 2025 00:41:42 +0000 (0:00:09.343) 0:00:10.708 **** 2025-09-06 00:41:53.923730 | orchestrator | ok: [testbed-manager] 2025-09-06 00:41:53.923742 | orchestrator | 2025-09-06 00:41:53.923753 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-06 00:41:53.923764 | orchestrator | Saturday 06 September 2025 00:41:43 +0000 (0:00:01.263) 0:00:11.971 **** 2025-09-06 00:41:53.923774 | orchestrator | changed: [testbed-manager] 2025-09-06 00:41:53.923785 | orchestrator | 2025-09-06 00:41:53.923796 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-06 00:41:53.923806 | orchestrator | Saturday 06 September 2025 00:41:45 +0000 (0:00:01.953) 0:00:13.925 **** 2025-09-06 00:41:53.923817 | orchestrator | ok: [testbed-manager] 2025-09-06 00:41:53.923895 | orchestrator | 2025-09-06 00:41:53.923908 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-06 00:41:53.923919 | orchestrator | Saturday 06 September 2025 00:41:46 +0000 (0:00:01.118) 0:00:15.044 **** 2025-09-06 00:41:53.923930 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 00:41:53.923943 | orchestrator | 2025-09-06 00:41:53.923957 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-06 00:41:53.923970 | orchestrator | Saturday 06 September 2025 00:41:47 +0000 (0:00:00.798) 0:00:15.842 **** 2025-09-06 00:41:53.923983 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:41:53.923996 | orchestrator | 2025-09-06 00:41:53.924009 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-06 00:41:53.924049 | orchestrator | Saturday 06 September 2025 00:41:47 +0000 (0:00:00.160) 0:00:16.003 **** 2025-09-06 00:41:53.924062 | orchestrator | changed: [testbed-manager] 2025-09-06 00:41:53.924074 | orchestrator 
| 2025-09-06 00:41:53.924087 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-06 00:41:53.924100 | orchestrator | Saturday 06 September 2025 00:41:48 +0000 (0:00:00.951) 0:00:16.954 **** 2025-09-06 00:41:53.924113 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-06 00:41:53.924125 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-06 00:41:53.924140 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-06 00:41:53.924152 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-06 00:41:53.924164 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-06 00:41:53.924177 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-06 00:41:53.924189 | orchestrator | 2025-09-06 00:41:53.924201 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-06 00:41:53.924214 | orchestrator | Saturday 06 September 2025 00:41:50 +0000 (0:00:02.115) 0:00:19.070 **** 2025-09-06 00:41:53.924226 | orchestrator | ok: [testbed-manager] 2025-09-06 00:41:53.924239 | orchestrator | 2025-09-06 00:41:53.924252 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-06 00:41:53.924264 | orchestrator | Saturday 06 September 2025 00:41:52 +0000 (0:00:01.341) 0:00:20.411 **** 2025-09-06 00:41:53.924277 | orchestrator | changed: [testbed-manager] 2025-09-06 00:41:53.924291 | orchestrator | 2025-09-06 00:41:53.924302 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:41:53.924313 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:41:53.924324 | orchestrator | 2025-09-06 00:41:53.924334 | orchestrator | 2025-09-06 00:41:53.924345 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:41:53.924356 | orchestrator | Saturday 06 September 2025 00:41:53 +0000 (0:00:01.428) 0:00:21.840 **** 2025-09-06 00:41:53.924366 | orchestrator | =============================================================================== 2025-09-06 00:41:53.924377 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.34s 2025-09-06 00:41:53.924388 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.12s 2025-09-06 00:41:53.924398 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.95s 2025-09-06 00:41:53.924409 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.43s 2025-09-06 00:41:53.924436 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.34s 2025-09-06 00:41:53.924448 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.26s 2025-09-06 00:41:53.924458 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.12s 2025-09-06 00:41:53.924469 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.01s 2025-09-06 
00:41:53.924480 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.95s 2025-09-06 00:41:53.924490 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.80s 2025-09-06 00:41:53.924501 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.17s 2025-09-06 00:41:53.924512 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s 2025-09-06 00:41:54.229348 | orchestrator | 2025-09-06 00:41:54.231985 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Sep 6 00:41:54 UTC 2025 2025-09-06 00:41:54.232053 | orchestrator | 2025-09-06 00:41:56.105178 | orchestrator | 2025-09-06 00:41:56 | INFO  | Collection nutshell is prepared for execution 2025-09-06 00:41:56.105298 | orchestrator | 2025-09-06 00:41:56 | INFO  | D [0] - dotfiles 2025-09-06 00:42:06.162410 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [0] - homer 2025-09-06 00:42:06.162526 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [0] - netdata 2025-09-06 00:42:06.162541 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [0] - openstackclient 2025-09-06 00:42:06.162553 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [0] - phpmyadmin 2025-09-06 00:42:06.162853 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [0] - common 2025-09-06 00:42:06.166538 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [1] -- loadbalancer 2025-09-06 00:42:06.166935 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [2] --- opensearch 2025-09-06 00:42:06.166958 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [2] --- mariadb-ng 2025-09-06 00:42:06.167201 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [3] ---- horizon 2025-09-06 00:42:06.167440 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [3] ---- keystone 2025-09-06 00:42:06.167570 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [4] ----- neutron 2025-09-06 00:42:06.167957 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [5] ------ wait-for-nova 2025-09-06 00:42:06.167981 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [5] ------ octavia 2025-09-06 00:42:06.169470 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [4] ----- barbican 2025-09-06 00:42:06.169747 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [4] ----- designate 2025-09-06 00:42:06.169785 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [4] ----- ironic 2025-09-06 00:42:06.169797 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [4] ----- placement 2025-09-06 00:42:06.169842 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [4] ----- magnum 2025-09-06 00:42:06.170449 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [1] -- openvswitch 2025-09-06 00:42:06.170593 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [2] --- ovn 2025-09-06 00:42:06.170899 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [1] -- memcached 2025-09-06 00:42:06.170919 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [1] -- redis 2025-09-06 00:42:06.171104 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [1] -- rabbitmq-ng 2025-09-06 00:42:06.171362 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [0] - kubernetes 2025-09-06 00:42:06.173553 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [1] -- kubeconfig 2025-09-06 00:42:06.173573 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [1] -- copy-kubeconfig 2025-09-06 00:42:06.173894 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [0] - ceph 2025-09-06 00:42:06.176010 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [1] -- ceph-pools 2025-09-06 
00:42:06.176040 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [2] --- copy-ceph-keys 2025-09-06 00:42:06.176052 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [3] ---- cephclient 2025-09-06 00:42:06.176063 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-06 00:42:06.176471 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [4] ----- wait-for-keystone 2025-09-06 00:42:06.176564 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-06 00:42:06.176579 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [5] ------ glance 2025-09-06 00:42:06.176591 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [5] ------ cinder 2025-09-06 00:42:06.176608 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [5] ------ nova 2025-09-06 00:42:06.176942 | orchestrator | 2025-09-06 00:42:06 | INFO  | A [4] ----- prometheus 2025-09-06 00:42:06.176964 | orchestrator | 2025-09-06 00:42:06 | INFO  | D [5] ------ grafana 2025-09-06 00:42:06.385614 | orchestrator | 2025-09-06 00:42:06 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-06 00:42:06.385715 | orchestrator | 2025-09-06 00:42:06 | INFO  | Tasks are running in the background 2025-09-06 00:42:08.994255 | orchestrator | 2025-09-06 00:42:08 | INFO  | No task IDs specified, wait for all currently running tasks 2025-09-06 00:42:11.107155 | orchestrator | 2025-09-06 00:42:11 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:11.107345 | orchestrator | 2025-09-06 00:42:11 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:11.108036 | orchestrator | 2025-09-06 00:42:11 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:11.110532 | orchestrator | 2025-09-06 00:42:11 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:11.110930 | orchestrator | 2025-09-06 00:42:11 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:11.111576 | orchestrator | 2025-09-06 00:42:11 | INFO  | Task 17f44a29-bf61-4626-9625-53402673215c is in state STARTED 2025-09-06 00:42:11.112106 | orchestrator | 2025-09-06 00:42:11 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:11.112130 | orchestrator | 2025-09-06 00:42:11 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:14.188586 | orchestrator | 2025-09-06 00:42:14 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:14.188695 | orchestrator | 2025-09-06 00:42:14 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:14.188711 | orchestrator | 2025-09-06 00:42:14 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:14.189151 | orchestrator | 2025-09-06 00:42:14 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:14.189568 | orchestrator | 2025-09-06 00:42:14 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:14.189998 | orchestrator | 2025-09-06 00:42:14 | INFO  | Task 17f44a29-bf61-4626-9625-53402673215c is in state STARTED 2025-09-06 00:42:14.190538 | orchestrator | 2025-09-06 00:42:14 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:14.190562 | orchestrator | 2025-09-06 00:42:14 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:17.218284 | orchestrator | 2025-09-06 00:42:17 | INFO  
| Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:17.222412 | orchestrator | 2025-09-06 00:42:17 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:17.226673 | orchestrator | 2025-09-06 00:42:17 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:17.229103 | orchestrator | 2025-09-06 00:42:17 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:17.229126 | orchestrator | 2025-09-06 00:42:17 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:17.229138 | orchestrator | 2025-09-06 00:42:17 | INFO  | Task 17f44a29-bf61-4626-9625-53402673215c is in state STARTED 2025-09-06 00:42:17.229149 | orchestrator | 2025-09-06 00:42:17 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:17.229160 | orchestrator | 2025-09-06 00:42:17 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:20.354064 | orchestrator | 2025-09-06 00:42:20 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:20.354174 | orchestrator | 2025-09-06 00:42:20 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:20.354190 | orchestrator | 2025-09-06 00:42:20 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:20.354201 | orchestrator | 2025-09-06 00:42:20 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:20.354212 | orchestrator | 2025-09-06 00:42:20 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:20.354223 | orchestrator | 2025-09-06 00:42:20 | INFO  | Task 17f44a29-bf61-4626-9625-53402673215c is in state STARTED 2025-09-06 00:42:20.354233 | orchestrator | 2025-09-06 00:42:20 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:20.354244 | orchestrator | 2025-09-06 00:42:20 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:23.400207 | orchestrator | 2025-09-06 00:42:23 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:23.404278 | orchestrator | 2025-09-06 00:42:23 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:23.406506 | orchestrator | 2025-09-06 00:42:23 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:23.408773 | orchestrator | 2025-09-06 00:42:23 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:23.411011 | orchestrator | 2025-09-06 00:42:23 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:23.413818 | orchestrator | 2025-09-06 00:42:23 | INFO  | Task 17f44a29-bf61-4626-9625-53402673215c is in state STARTED 2025-09-06 00:42:23.416147 | orchestrator | 2025-09-06 00:42:23 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:23.416491 | orchestrator | 2025-09-06 00:42:23 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:26.457677 | orchestrator | 2025-09-06 00:42:26 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:26.459092 | orchestrator | 2025-09-06 00:42:26 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:26.463423 | orchestrator | 2025-09-06 00:42:26 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:26.466161 
| orchestrator | 2025-09-06 00:42:26 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:26.469885 | orchestrator | 2025-09-06 00:42:26 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:26.470329 | orchestrator | 2025-09-06 00:42:26 | INFO  | Task 17f44a29-bf61-4626-9625-53402673215c is in state STARTED 2025-09-06 00:42:26.470987 | orchestrator | 2025-09-06 00:42:26 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:26.471009 | orchestrator | 2025-09-06 00:42:26 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:29.569435 | orchestrator | 2025-09-06 00:42:29 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:29.572033 | orchestrator | 2025-09-06 00:42:29 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:29.576455 | orchestrator | 2025-09-06 00:42:29 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:29.577968 | orchestrator | 2025-09-06 00:42:29 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:29.593429 | orchestrator | 2025-09-06 00:42:29 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:29.593503 | orchestrator | 2025-09-06 00:42:29 | INFO  | Task 17f44a29-bf61-4626-9625-53402673215c is in state STARTED 2025-09-06 00:42:29.593518 | orchestrator | 2025-09-06 00:42:29 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:29.593530 | orchestrator | 2025-09-06 00:42:29 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:32.658758 | orchestrator | 2025-09-06 00:42:32 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:32.658908 | orchestrator | 2025-09-06 00:42:32 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:32.658926 | orchestrator | 2025-09-06 00:42:32 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:32.661834 | orchestrator | 2025-09-06 00:42:32 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:32.662207 | orchestrator | 2025-09-06 00:42:32 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:32.667025 | orchestrator | 2025-09-06 00:42:32 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:42:32.667053 | orchestrator | 2025-09-06 00:42:32 | INFO  | Task 17f44a29-bf61-4626-9625-53402673215c is in state SUCCESS 2025-09-06 00:42:32.667391 | orchestrator | 2025-09-06 00:42:32.667416 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-06 00:42:32.667427 | orchestrator | 2025-09-06 00:42:32.667439 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-09-06 00:42:32.667450 | orchestrator | Saturday 06 September 2025 00:42:17 +0000 (0:00:00.485) 0:00:00.485 **** 2025-09-06 00:42:32.667544 | orchestrator | changed: [testbed-manager] 2025-09-06 00:42:32.667560 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:42:32.667572 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:42:32.667583 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:42:32.667594 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:42:32.667606 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:42:32.667617 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:42:32.667628 | orchestrator | 2025-09-06 00:42:32.667640 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-09-06 00:42:32.667651 | orchestrator | Saturday 06 September 2025 00:42:21 +0000 (0:00:03.755) 0:00:04.241 **** 2025-09-06 00:42:32.667663 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-06 00:42:32.667674 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-06 00:42:32.667685 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-06 00:42:32.667696 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-06 00:42:32.667708 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-06 00:42:32.667719 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-06 00:42:32.667730 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-06 00:42:32.667741 | orchestrator | 2025-09-06 00:42:32.667753 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-09-06 00:42:32.667764 | orchestrator | Saturday 06 September 2025 00:42:22 +0000 (0:00:01.484) 0:00:05.726 **** 2025-09-06 00:42:32.667815 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-06 00:42:21.694443', 'end': '2025-09-06 00:42:21.698340', 'delta': '0:00:00.003897', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-06 00:42:32.667859 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-06 00:42:22.186214', 'end': '2025-09-06 00:42:22.195735', 'delta': '0:00:00.009521', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-06 00:42:32.667873 | 
orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-06 00:42:22.308963', 'end': '2025-09-06 00:42:22.318810', 'delta': '0:00:00.009847', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-06 00:42:32.667898 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-06 00:42:21.912519', 'end': '2025-09-06 00:42:21.920033', 'delta': '0:00:00.007514', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-06 00:42:32.667910 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-06 00:42:22.538139', 'end': '2025-09-06 00:42:22.546375', 'delta': '0:00:00.008236', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-06 00:42:32.667929 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-06 00:42:22.606399', 'end': '2025-09-06 00:42:22.614545', 'delta': '0:00:00.008146', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-06 00:42:32.667967 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-06 00:42:22.660731', 'end': '2025-09-06 00:42:22.669978', 'delta': '0:00:00.009247', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-06 00:42:32.667988 | orchestrator | 2025-09-06 00:42:32.668003 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-06 00:42:32.668014 | orchestrator | Saturday 06 September 2025 00:42:24 +0000 (0:00:01.961) 0:00:07.687 **** 2025-09-06 00:42:32.668025 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-06 00:42:32.668036 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-06 00:42:32.668047 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-06 00:42:32.668057 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-06 00:42:32.668068 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-06 00:42:32.668079 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-06 00:42:32.668089 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-06 00:42:32.668100 | orchestrator | 2025-09-06 00:42:32.668111 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-09-06 00:42:32.668122 | orchestrator | Saturday 06 September 2025 00:42:26 +0000 (0:00:01.638) 0:00:09.325 **** 2025-09-06 00:42:32.668133 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-06 00:42:32.668144 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-06 00:42:32.668154 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-06 00:42:32.668165 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-06 00:42:32.668179 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-06 00:42:32.668192 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-06 00:42:32.668206 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-06 00:42:32.668219 | orchestrator | 2025-09-06 00:42:32.668233 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:42:32.668255 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:42:32.668270 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:42:32.668284 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:42:32.668296 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:42:32.668309 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:42:32.668330 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:42:32.668343 | orchestrator | testbed-node-5 : ok=5  
changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:42:32.668355 | orchestrator | 2025-09-06 00:42:32.668368 | orchestrator | 2025-09-06 00:42:32.668380 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:42:32.668393 | orchestrator | Saturday 06 September 2025 00:42:29 +0000 (0:00:03.113) 0:00:12.439 **** 2025-09-06 00:42:32.668406 | orchestrator | =============================================================================== 2025-09-06 00:42:32.668418 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.76s 2025-09-06 00:42:32.668430 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.11s 2025-09-06 00:42:32.668444 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.96s 2025-09-06 00:42:32.668462 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.64s 2025-09-06 00:42:32.668475 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.48s 2025-09-06 00:42:32.668488 | orchestrator | 2025-09-06 00:42:32 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:32.670132 | orchestrator | 2025-09-06 00:42:32 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:35.849963 | orchestrator | 2025-09-06 00:42:35 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:35.854342 | orchestrator | 2025-09-06 00:42:35 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:35.857427 | orchestrator | 2025-09-06 00:42:35 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:35.863288 | orchestrator | 2025-09-06 00:42:35 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:35.867474 | orchestrator | 2025-09-06 00:42:35 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:35.868133 | orchestrator | 2025-09-06 00:42:35 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:42:35.870972 | orchestrator | 2025-09-06 00:42:35 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:35.870996 | orchestrator | 2025-09-06 00:42:35 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:38.999227 | orchestrator | 2025-09-06 00:42:39 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:38.999325 | orchestrator | 2025-09-06 00:42:39 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:38.999719 | orchestrator | 2025-09-06 00:42:39 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:39.000439 | orchestrator | 2025-09-06 00:42:39 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:39.000825 | orchestrator | 2025-09-06 00:42:39 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:39.001345 | orchestrator | 2025-09-06 00:42:39 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:42:39.001957 | orchestrator | 2025-09-06 00:42:39 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:39.002103 | orchestrator | 2025-09-06 00:42:39 | INFO  | Wait 1 second(s) until the next check 2025-09-06 
00:42:42.029414 | orchestrator | 2025-09-06 00:42:42 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:42.029539 | orchestrator | 2025-09-06 00:42:42 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:42.029970 | orchestrator | 2025-09-06 00:42:42 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:42.030873 | orchestrator | 2025-09-06 00:42:42 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:42.031308 | orchestrator | 2025-09-06 00:42:42 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:42.031663 | orchestrator | 2025-09-06 00:42:42 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:42:42.032264 | orchestrator | 2025-09-06 00:42:42 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:42.032285 | orchestrator | 2025-09-06 00:42:42 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:45.079099 | orchestrator | 2025-09-06 00:42:45 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:45.081916 | orchestrator | 2025-09-06 00:42:45 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:45.083739 | orchestrator | 2025-09-06 00:42:45 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:45.092192 | orchestrator | 2025-09-06 00:42:45 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:45.095225 | orchestrator | 2025-09-06 00:42:45 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:45.096945 | orchestrator | 2025-09-06 00:42:45 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:42:45.098692 | orchestrator | 2025-09-06 00:42:45 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:45.099222 | orchestrator | 2025-09-06 00:42:45 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:48.143500 | orchestrator | 2025-09-06 00:42:48 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:48.146427 | orchestrator | 2025-09-06 00:42:48 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:48.147164 | orchestrator | 2025-09-06 00:42:48 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:48.148964 | orchestrator | 2025-09-06 00:42:48 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:48.149820 | orchestrator | 2025-09-06 00:42:48 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:48.155113 | orchestrator | 2025-09-06 00:42:48 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:42:48.155138 | orchestrator | 2025-09-06 00:42:48 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:48.155150 | orchestrator | 2025-09-06 00:42:48 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:51.216811 | orchestrator | 2025-09-06 00:42:51 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:51.216903 | orchestrator | 2025-09-06 00:42:51 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:51.216917 | orchestrator | 2025-09-06 00:42:51 | INFO  | Task 
c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:51.216929 | orchestrator | 2025-09-06 00:42:51 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:51.216939 | orchestrator | 2025-09-06 00:42:51 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:51.216971 | orchestrator | 2025-09-06 00:42:51 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:42:51.216983 | orchestrator | 2025-09-06 00:42:51 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:51.216994 | orchestrator | 2025-09-06 00:42:51 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:54.263859 | orchestrator | 2025-09-06 00:42:54 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:54.263951 | orchestrator | 2025-09-06 00:42:54 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:54.263975 | orchestrator | 2025-09-06 00:42:54 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:54.263987 | orchestrator | 2025-09-06 00:42:54 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state STARTED 2025-09-06 00:42:54.263998 | orchestrator | 2025-09-06 00:42:54 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:54.264009 | orchestrator | 2025-09-06 00:42:54 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:42:54.264019 | orchestrator | 2025-09-06 00:42:54 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:54.264030 | orchestrator | 2025-09-06 00:42:54 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:42:57.325355 | orchestrator | 2025-09-06 00:42:57 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:42:57.326148 | orchestrator | 2025-09-06 00:42:57 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state STARTED 2025-09-06 00:42:57.328612 | orchestrator | 2025-09-06 00:42:57 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:42:57.329767 | orchestrator | 2025-09-06 00:42:57 | INFO  | Task 7d082452-998e-4860-9e11-c39d2c961a30 is in state SUCCESS 2025-09-06 00:42:57.330408 | orchestrator | 2025-09-06 00:42:57 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:42:57.332299 | orchestrator | 2025-09-06 00:42:57 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:42:57.336768 | orchestrator | 2025-09-06 00:42:57 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:42:57.336870 | orchestrator | 2025-09-06 00:42:57 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:00.456526 | orchestrator | 2025-09-06 00:43:00 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:00.456617 | orchestrator | 2025-09-06 00:43:00 | INFO  | Task ce876636-3605-40c8-a72e-c4f51bb47d92 is in state SUCCESS 2025-09-06 00:43:00.456933 | orchestrator | 2025-09-06 00:43:00 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:43:00.457806 | orchestrator | 2025-09-06 00:43:00 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:00.460040 | orchestrator | 2025-09-06 00:43:00 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:43:00.460065 | 
orchestrator | 2025-09-06 00:43:00 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:00.460076 | orchestrator | 2025-09-06 00:43:00 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:03.564471 | orchestrator | 2025-09-06 00:43:03 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:03.564558 | orchestrator | 2025-09-06 00:43:03 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:43:03.564594 | orchestrator | 2025-09-06 00:43:03 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:03.564607 | orchestrator | 2025-09-06 00:43:03 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:43:03.564618 | orchestrator | 2025-09-06 00:43:03 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:03.564629 | orchestrator | 2025-09-06 00:43:03 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:06.629273 | orchestrator | 2025-09-06 00:43:06 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:06.629363 | orchestrator | 2025-09-06 00:43:06 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:43:06.629378 | orchestrator | 2025-09-06 00:43:06 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:06.629390 | orchestrator | 2025-09-06 00:43:06 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:43:06.629400 | orchestrator | 2025-09-06 00:43:06 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:06.629412 | orchestrator | 2025-09-06 00:43:06 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:09.660524 | orchestrator | 2025-09-06 00:43:09 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:09.661660 | orchestrator | 2025-09-06 00:43:09 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:43:09.663493 | orchestrator | 2025-09-06 00:43:09 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:09.665123 | orchestrator | 2025-09-06 00:43:09 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:43:09.667072 | orchestrator | 2025-09-06 00:43:09 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:09.667406 | orchestrator | 2025-09-06 00:43:09 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:12.704909 | orchestrator | 2025-09-06 00:43:12 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:12.705614 | orchestrator | 2025-09-06 00:43:12 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:43:12.707959 | orchestrator | 2025-09-06 00:43:12 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:12.708660 | orchestrator | 2025-09-06 00:43:12 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:43:12.709631 | orchestrator | 2025-09-06 00:43:12 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:12.709666 | orchestrator | 2025-09-06 00:43:12 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:15.776396 | orchestrator | 2025-09-06 00:43:15 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:15.781264 | 
orchestrator | 2025-09-06 00:43:15 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:43:15.781294 | orchestrator | 2025-09-06 00:43:15 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:15.785113 | orchestrator | 2025-09-06 00:43:15 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:43:15.785137 | orchestrator | 2025-09-06 00:43:15 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:15.785149 | orchestrator | 2025-09-06 00:43:15 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:18.809250 | orchestrator | 2025-09-06 00:43:18 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:18.829001 | orchestrator | 2025-09-06 00:43:18 | INFO  | Task c444362d-001b-4e23-b863-4dcf1618d461 is in state STARTED 2025-09-06 00:43:18.830308 | orchestrator | 2025-09-06 00:43:18 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:18.834817 | orchestrator | 2025-09-06 00:43:18 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:43:18.836122 | orchestrator | 2025-09-06 00:43:18 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:18.836143 | orchestrator | 2025-09-06 00:43:18 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:21.872110 | orchestrator | 2025-09-06 00:43:21 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:21.873937 | orchestrator | 2025-09-06 00:43:21.873983 | orchestrator | 2025-09-06 00:43:21.873996 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-06 00:43:21.874008 | orchestrator | 2025-09-06 00:43:21.874101 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-06 00:43:21.874118 | orchestrator | Saturday 06 September 2025 00:42:18 +0000 (0:00:00.465) 0:00:00.465 **** 2025-09-06 00:43:21.874131 | orchestrator | ok: [testbed-manager] => { 2025-09-06 00:43:21.874145 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-09-06 00:43:21.874158 | orchestrator | } 2025-09-06 00:43:21.874169 | orchestrator | 2025-09-06 00:43:21.874181 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-06 00:43:21.874199 | orchestrator | Saturday 06 September 2025 00:42:18 +0000 (0:00:00.184) 0:00:00.649 **** 2025-09-06 00:43:21.874211 | orchestrator | ok: [testbed-manager] 2025-09-06 00:43:21.874222 | orchestrator | 2025-09-06 00:43:21.874233 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-06 00:43:21.874244 | orchestrator | Saturday 06 September 2025 00:42:20 +0000 (0:00:01.707) 0:00:02.357 **** 2025-09-06 00:43:21.874255 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-06 00:43:21.874265 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-06 00:43:21.874277 | orchestrator | 2025-09-06 00:43:21.874287 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-06 00:43:21.874298 | orchestrator | Saturday 06 September 2025 00:42:21 +0000 (0:00:00.769) 0:00:03.127 **** 2025-09-06 00:43:21.874309 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.874320 | orchestrator | 2025-09-06 00:43:21.874357 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-06 00:43:21.874369 | orchestrator | Saturday 06 September 2025 00:42:23 +0000 (0:00:02.322) 0:00:05.449 **** 2025-09-06 00:43:21.874380 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.874391 | orchestrator | 2025-09-06 00:43:21.874402 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-06 00:43:21.874413 | orchestrator | Saturday 06 September 2025 00:42:26 +0000 (0:00:02.487) 0:00:07.944 **** 2025-09-06 00:43:21.874424 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
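The "FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left)." entry above is Ansible's retries/until behaviour: the task is re-run until its check passes or the retry budget is exhausted, which is why it later resolves to ok. A minimal Python sketch of that pattern follows; the homer_responds() probe and its URL are assumptions for illustration only, not anything taken from the osism.services.homer role.

```python
import time
import urllib.request


def wait_until(check, retries=10, delay=5):
    """Re-run check() until it succeeds or the retry budget runs out,
    mirroring the retries/delay/until countdown seen in the log above."""
    for attempt in range(retries, 0, -1):
        if check():
            return True
        print(f"FAILED - RETRYING ({attempt - 1} retries left).")
        time.sleep(delay)
    return False


def homer_responds(url="http://localhost:8000/"):
    """Hypothetical readiness probe; the URL and port are assumptions,
    not values read from the deployed homer service."""
    try:
        with urllib.request.urlopen(url, timeout=2):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    wait_until(homer_responds)
```

In the role itself this is presumably expressed declaratively on the task (retries/delay/until), so only the countdown messages appear in the console output.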
2025-09-06 00:43:21.874435 | orchestrator | ok: [testbed-manager] 2025-09-06 00:43:21.874446 | orchestrator | 2025-09-06 00:43:21.874457 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-06 00:43:21.874468 | orchestrator | Saturday 06 September 2025 00:42:51 +0000 (0:00:25.373) 0:00:33.317 **** 2025-09-06 00:43:21.874479 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.874490 | orchestrator | 2025-09-06 00:43:21.874501 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:43:21.874515 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:43:21.874550 | orchestrator | 2025-09-06 00:43:21.874563 | orchestrator | 2025-09-06 00:43:21.874575 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:43:21.874588 | orchestrator | Saturday 06 September 2025 00:42:54 +0000 (0:00:02.556) 0:00:35.874 **** 2025-09-06 00:43:21.874600 | orchestrator | =============================================================================== 2025-09-06 00:43:21.874613 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.37s 2025-09-06 00:43:21.874626 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.56s 2025-09-06 00:43:21.874638 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.49s 2025-09-06 00:43:21.874651 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.32s 2025-09-06 00:43:21.874663 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.71s 2025-09-06 00:43:21.874713 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.77s 2025-09-06 00:43:21.874725 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.18s 2025-09-06 00:43:21.874737 | orchestrator | 2025-09-06 00:43:21.874750 | orchestrator | 2025-09-06 00:43:21.874762 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-06 00:43:21.874776 | orchestrator | 2025-09-06 00:43:21.874788 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-06 00:43:21.874801 | orchestrator | Saturday 06 September 2025 00:42:16 +0000 (0:00:00.503) 0:00:00.503 **** 2025-09-06 00:43:21.874813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-06 00:43:21.874827 | orchestrator | 2025-09-06 00:43:21.874841 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-06 00:43:21.874853 | orchestrator | Saturday 06 September 2025 00:42:17 +0000 (0:00:00.642) 0:00:01.146 **** 2025-09-06 00:43:21.874866 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-06 00:43:21.874877 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-06 00:43:21.874888 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-06 00:43:21.874899 | orchestrator | 2025-09-06 00:43:21.874910 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-06 
00:43:21.874920 | orchestrator | Saturday 06 September 2025 00:42:19 +0000 (0:00:01.874) 0:00:03.020 **** 2025-09-06 00:43:21.874931 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.874942 | orchestrator | 2025-09-06 00:43:21.874953 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-06 00:43:21.874963 | orchestrator | Saturday 06 September 2025 00:42:20 +0000 (0:00:01.493) 0:00:04.514 **** 2025-09-06 00:43:21.874988 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-06 00:43:21.875000 | orchestrator | ok: [testbed-manager] 2025-09-06 00:43:21.875010 | orchestrator | 2025-09-06 00:43:21.875021 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-06 00:43:21.875032 | orchestrator | Saturday 06 September 2025 00:42:51 +0000 (0:00:31.152) 0:00:35.666 **** 2025-09-06 00:43:21.875042 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.875053 | orchestrator | 2025-09-06 00:43:21.875064 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-06 00:43:21.875075 | orchestrator | Saturday 06 September 2025 00:42:53 +0000 (0:00:01.928) 0:00:37.594 **** 2025-09-06 00:43:21.875085 | orchestrator | ok: [testbed-manager] 2025-09-06 00:43:21.875096 | orchestrator | 2025-09-06 00:43:21.875111 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-06 00:43:21.875122 | orchestrator | Saturday 06 September 2025 00:42:54 +0000 (0:00:00.911) 0:00:38.506 **** 2025-09-06 00:43:21.875133 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.875152 | orchestrator | 2025-09-06 00:43:21.875163 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-06 00:43:21.875173 | orchestrator | Saturday 06 September 2025 00:42:57 +0000 (0:00:02.520) 0:00:41.026 **** 2025-09-06 00:43:21.875184 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.875194 | orchestrator | 2025-09-06 00:43:21.875205 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-06 00:43:21.875216 | orchestrator | Saturday 06 September 2025 00:42:57 +0000 (0:00:00.765) 0:00:41.792 **** 2025-09-06 00:43:21.875227 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.875237 | orchestrator | 2025-09-06 00:43:21.875248 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-06 00:43:21.875259 | orchestrator | Saturday 06 September 2025 00:42:58 +0000 (0:00:01.006) 0:00:42.798 **** 2025-09-06 00:43:21.875269 | orchestrator | ok: [testbed-manager] 2025-09-06 00:43:21.875280 | orchestrator | 2025-09-06 00:43:21.875290 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:43:21.875301 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:43:21.875312 | orchestrator | 2025-09-06 00:43:21.875323 | orchestrator | 2025-09-06 00:43:21.875334 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:43:21.875344 | orchestrator | Saturday 06 September 2025 00:42:59 +0000 (0:00:00.423) 0:00:43.222 **** 2025-09-06 00:43:21.875355 | orchestrator | 
=============================================================================== 2025-09-06 00:43:21.875366 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 31.15s 2025-09-06 00:43:21.875376 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.52s 2025-09-06 00:43:21.875387 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.93s 2025-09-06 00:43:21.875397 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.87s 2025-09-06 00:43:21.875408 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.49s 2025-09-06 00:43:21.875418 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.01s 2025-09-06 00:43:21.875429 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.91s 2025-09-06 00:43:21.875440 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.77s 2025-09-06 00:43:21.875451 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.64s 2025-09-06 00:43:21.875461 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.42s 2025-09-06 00:43:21.875472 | orchestrator | 2025-09-06 00:43:21.875482 | orchestrator | 2025-09-06 00:43:21.875493 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:43:21.875504 | orchestrator | 2025-09-06 00:43:21.875514 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:43:21.875525 | orchestrator | Saturday 06 September 2025 00:42:16 +0000 (0:00:00.512) 0:00:00.512 **** 2025-09-06 00:43:21.875535 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-06 00:43:21.875546 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-06 00:43:21.875557 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-06 00:43:21.875567 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-06 00:43:21.875578 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-06 00:43:21.875589 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-06 00:43:21.875599 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-06 00:43:21.875610 | orchestrator | 2025-09-06 00:43:21.875620 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-06 00:43:21.875631 | orchestrator | 2025-09-06 00:43:21.875641 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-06 00:43:21.875658 | orchestrator | Saturday 06 September 2025 00:42:19 +0000 (0:00:02.664) 0:00:03.176 **** 2025-09-06 00:43:21.875699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:43:21.875719 | orchestrator | 2025-09-06 00:43:21.875730 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-06 00:43:21.875741 | orchestrator | Saturday 06 September 2025 00:42:20 +0000 (0:00:01.280) 0:00:04.457 **** 2025-09-06 
00:43:21.875752 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:43:21.875762 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:43:21.875773 | orchestrator | ok: [testbed-manager] 2025-09-06 00:43:21.875784 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:43:21.875795 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:43:21.875811 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:43:21.875822 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:43:21.875833 | orchestrator | 2025-09-06 00:43:21.875844 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-06 00:43:21.875855 | orchestrator | Saturday 06 September 2025 00:42:22 +0000 (0:00:02.090) 0:00:06.547 **** 2025-09-06 00:43:21.875865 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:43:21.875876 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:43:21.875887 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:43:21.875897 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:43:21.875908 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:43:21.875918 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:43:21.875929 | orchestrator | ok: [testbed-manager] 2025-09-06 00:43:21.875940 | orchestrator | 2025-09-06 00:43:21.875950 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-06 00:43:21.875965 | orchestrator | Saturday 06 September 2025 00:42:26 +0000 (0:00:03.431) 0:00:09.979 **** 2025-09-06 00:43:21.875976 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:43:21.875987 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:43:21.875997 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.876008 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:43:21.876019 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:43:21.876029 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:43:21.876040 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:43:21.876134 | orchestrator | 2025-09-06 00:43:21.876150 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-06 00:43:21.876161 | orchestrator | Saturday 06 September 2025 00:42:29 +0000 (0:00:03.463) 0:00:13.442 **** 2025-09-06 00:43:21.876172 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.876182 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:43:21.876193 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:43:21.876204 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:43:21.876214 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:43:21.876225 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:43:21.876235 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:43:21.876246 | orchestrator | 2025-09-06 00:43:21.876257 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-06 00:43:21.876268 | orchestrator | Saturday 06 September 2025 00:42:40 +0000 (0:00:10.485) 0:00:23.928 **** 2025-09-06 00:43:21.876278 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:43:21.876289 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:43:21.876300 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:43:21.876310 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:43:21.876321 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:43:21.876332 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:43:21.876419 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.876433 | 
orchestrator | 2025-09-06 00:43:21.876444 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-06 00:43:21.876455 | orchestrator | Saturday 06 September 2025 00:43:00 +0000 (0:00:20.782) 0:00:44.711 **** 2025-09-06 00:43:21.876479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:43:21.876492 | orchestrator | 2025-09-06 00:43:21.876503 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-06 00:43:21.876514 | orchestrator | Saturday 06 September 2025 00:43:01 +0000 (0:00:00.960) 0:00:45.671 **** 2025-09-06 00:43:21.876525 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-06 00:43:21.876536 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-06 00:43:21.876547 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-06 00:43:21.876558 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-06 00:43:21.876569 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-06 00:43:21.876579 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-06 00:43:21.876590 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-06 00:43:21.876601 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-06 00:43:21.876611 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-06 00:43:21.876622 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-06 00:43:21.876633 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-06 00:43:21.876644 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-06 00:43:21.876655 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-06 00:43:21.876665 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-06 00:43:21.876731 | orchestrator | 2025-09-06 00:43:21.876743 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-06 00:43:21.876754 | orchestrator | Saturday 06 September 2025 00:43:06 +0000 (0:00:04.489) 0:00:50.161 **** 2025-09-06 00:43:21.876765 | orchestrator | ok: [testbed-manager] 2025-09-06 00:43:21.876776 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:43:21.876787 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:43:21.876798 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:43:21.876809 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:43:21.876819 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:43:21.876830 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:43:21.876841 | orchestrator | 2025-09-06 00:43:21.876852 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-06 00:43:21.876863 | orchestrator | Saturday 06 September 2025 00:43:07 +0000 (0:00:00.984) 0:00:51.145 **** 2025-09-06 00:43:21.876874 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.876885 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:43:21.876896 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:43:21.876905 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:43:21.876915 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:43:21.876924 | orchestrator | 
changed: [testbed-node-4] 2025-09-06 00:43:21.876934 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:43:21.876944 | orchestrator | 2025-09-06 00:43:21.876953 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-06 00:43:21.876971 | orchestrator | Saturday 06 September 2025 00:43:08 +0000 (0:00:01.321) 0:00:52.466 **** 2025-09-06 00:43:21.876981 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:43:21.876991 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:43:21.877001 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:43:21.877010 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:43:21.877020 | orchestrator | ok: [testbed-manager] 2025-09-06 00:43:21.877029 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:43:21.877039 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:43:21.877049 | orchestrator | 2025-09-06 00:43:21.877060 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-06 00:43:21.877072 | orchestrator | Saturday 06 September 2025 00:43:09 +0000 (0:00:01.185) 0:00:53.652 **** 2025-09-06 00:43:21.877090 | orchestrator | ok: [testbed-manager] 2025-09-06 00:43:21.877100 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:43:21.877111 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:43:21.877122 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:43:21.877138 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:43:21.877149 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:43:21.877160 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:43:21.877170 | orchestrator | 2025-09-06 00:43:21.877181 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-06 00:43:21.877192 | orchestrator | Saturday 06 September 2025 00:43:11 +0000 (0:00:01.791) 0:00:55.444 **** 2025-09-06 00:43:21.877203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-06 00:43:21.877216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:43:21.877228 | orchestrator | 2025-09-06 00:43:21.877239 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-06 00:43:21.877249 | orchestrator | Saturday 06 September 2025 00:43:13 +0000 (0:00:01.647) 0:00:57.092 **** 2025-09-06 00:43:21.877261 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.877387 | orchestrator | 2025-09-06 00:43:21.877400 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-06 00:43:21.877412 | orchestrator | Saturday 06 September 2025 00:43:15 +0000 (0:00:02.014) 0:00:59.106 **** 2025-09-06 00:43:21.877424 | orchestrator | changed: [testbed-manager] 2025-09-06 00:43:21.877434 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:43:21.877444 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:43:21.877453 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:43:21.877463 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:43:21.877473 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:43:21.877482 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:43:21.877492 | orchestrator | 2025-09-06 00:43:21.877501 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-06 00:43:21.877511 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:43:21.877521 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:43:21.877530 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:43:21.877540 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:43:21.877550 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:43:21.877559 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:43:21.877569 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:43:21.877578 | orchestrator | 2025-09-06 00:43:21.877587 | orchestrator | 2025-09-06 00:43:21.877597 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:43:21.877607 | orchestrator | Saturday 06 September 2025 00:43:18 +0000 (0:00:03.467) 0:01:02.573 **** 2025-09-06 00:43:21.877616 | orchestrator | =============================================================================== 2025-09-06 00:43:21.877626 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 20.78s 2025-09-06 00:43:21.877642 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.49s 2025-09-06 00:43:21.877652 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.49s 2025-09-06 00:43:21.877661 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.47s 2025-09-06 00:43:21.877688 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.46s 2025-09-06 00:43:21.877698 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.43s 2025-09-06 00:43:21.877707 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.66s 2025-09-06 00:43:21.877716 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.09s 2025-09-06 00:43:21.877726 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.01s 2025-09-06 00:43:21.877735 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.79s 2025-09-06 00:43:21.877745 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.65s 2025-09-06 00:43:21.877761 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.32s 2025-09-06 00:43:21.877771 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.28s 2025-09-06 00:43:21.877780 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.19s 2025-09-06 00:43:21.877790 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.98s 2025-09-06 00:43:21.877799 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 0.96s 2025-09-06 00:43:21.877809 | orchestrator | 2025-09-06 00:43:21 | INFO  | Task 
c444362d-001b-4e23-b863-4dcf1618d461 is in state SUCCESS 2025-09-06 00:43:21.877818 | orchestrator | 2025-09-06 00:43:21 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:21.877828 | orchestrator | 2025-09-06 00:43:21 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:43:21.877838 | orchestrator | 2025-09-06 00:43:21 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:21.877848 | orchestrator | 2025-09-06 00:43:21 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:24.916256 | orchestrator | 2025-09-06 00:43:24 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:24.916358 | orchestrator | 2025-09-06 00:43:24 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:24.916373 | orchestrator | 2025-09-06 00:43:24 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:43:24.917843 | orchestrator | 2025-09-06 00:43:24 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:24.917864 | orchestrator | 2025-09-06 00:43:24 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:27.959518 | orchestrator | 2025-09-06 00:43:27 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:27.960819 | orchestrator | 2025-09-06 00:43:27 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:27.963084 | orchestrator | 2025-09-06 00:43:27 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state STARTED 2025-09-06 00:43:27.964983 | orchestrator | 2025-09-06 00:43:27 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:27.965085 | orchestrator | 2025-09-06 00:43:27 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:30.998754 | orchestrator | 2025-09-06 00:43:30 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:30.999517 | orchestrator | 2025-09-06 00:43:31 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:31.000073 | orchestrator | 2025-09-06 00:43:31 | INFO  | Task 4100cc35-5597-4d81-90c7-f27651593c18 is in state SUCCESS 2025-09-06 00:43:31.001377 | orchestrator | 2025-09-06 00:43:31 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:31.001465 | orchestrator | 2025-09-06 00:43:31 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:34.046631 | orchestrator | 2025-09-06 00:43:34 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:34.048270 | orchestrator | 2025-09-06 00:43:34 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:34.051006 | orchestrator | 2025-09-06 00:43:34 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:34.051035 | orchestrator | 2025-09-06 00:43:34 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:37.097906 | orchestrator | 2025-09-06 00:43:37 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:37.099504 | orchestrator | 2025-09-06 00:43:37 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:37.101061 | orchestrator | 2025-09-06 00:43:37 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:37.101090 | orchestrator | 2025-09-06 00:43:37 | INFO  | Wait 1 
second(s) until the next check 2025-09-06 00:43:40.138952 | orchestrator | 2025-09-06 00:43:40 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:40.139662 | orchestrator | 2025-09-06 00:43:40 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:40.140851 | orchestrator | 2025-09-06 00:43:40 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:40.140874 | orchestrator | 2025-09-06 00:43:40 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:43.180780 | orchestrator | 2025-09-06 00:43:43 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:43.181074 | orchestrator | 2025-09-06 00:43:43 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:43.181519 | orchestrator | 2025-09-06 00:43:43 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:43.181741 | orchestrator | 2025-09-06 00:43:43 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:46.238249 | orchestrator | 2025-09-06 00:43:46 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:46.238496 | orchestrator | 2025-09-06 00:43:46 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:46.239527 | orchestrator | 2025-09-06 00:43:46 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:46.239557 | orchestrator | 2025-09-06 00:43:46 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:49.281556 | orchestrator | 2025-09-06 00:43:49 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:49.282643 | orchestrator | 2025-09-06 00:43:49 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:49.284121 | orchestrator | 2025-09-06 00:43:49 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:49.284146 | orchestrator | 2025-09-06 00:43:49 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:52.320969 | orchestrator | 2025-09-06 00:43:52 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:52.323236 | orchestrator | 2025-09-06 00:43:52 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:52.328076 | orchestrator | 2025-09-06 00:43:52 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:52.329094 | orchestrator | 2025-09-06 00:43:52 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:55.377641 | orchestrator | 2025-09-06 00:43:55 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:55.378642 | orchestrator | 2025-09-06 00:43:55 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:55.379491 | orchestrator | 2025-09-06 00:43:55 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state STARTED 2025-09-06 00:43:55.379527 | orchestrator | 2025-09-06 00:43:55 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:43:58.412054 | orchestrator | 2025-09-06 00:43:58 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:43:58.412922 | orchestrator | 2025-09-06 00:43:58 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:43:58.414008 | orchestrator | 2025-09-06 00:43:58 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state 
STARTED 2025-09-06 00:43:58.414090 | orchestrator | 2025-09-06 00:43:58 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 00:44:01 to 00:44:41: tasks d13e0df4-f779-48f7-9bef-9ce553fc96da, 69784897-053f-4c47-a2d1-589b5b14201e and 1202f4af-4807-45eb-b483-1c9d9c2259c0 were each reported "is in state STARTED", followed by "Wait 1 second(s) until the next check" ...]
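[editor's note: the UUIDs above are the IDs of deployment tasks queued on the testbed manager; the console re-polls each task every few seconds until it leaves the STARTED state, and each task corresponds to one role being applied through the OSISM CLI. The following is a minimal, hypothetical sketch of that pattern; the playbook wiring, loop contents and changed_when handling are assumptions, only the "osism apply <role>" command itself is part of OSISM's tooling.]
  # Hypothetical wrapper playbook -- illustrative sketch only.
  # "osism apply <role>" queues the role on the manager and the CLI keeps
  # polling the queued task, printing lines like
  # "Task <uuid> is in state STARTED" until the task reports SUCCESS.
  - name: Apply testbed service roles via the OSISM CLI
    hosts: testbed-manager
    gather_facts: false
    tasks:
      - name: Apply one role and wait for its task to finish
        ansible.builtin.command: "osism apply {{ item }}"
        loop:
          - phpmyadmin
          - common
        changed_when: true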
[... the same three tasks were reported in state STARTED on every check through 00:44:50, each cycle again followed by "Wait 1 second(s) until the next check" ...]
2025-09-06 00:44:53.271494 | orchestrator | 2025-09-06 00:44:53 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state STARTED
2025-09-06 00:44:53.272064 | orchestrator | 2025-09-06 00:44:53 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED
2025-09-06 00:44:53.272095 | orchestrator | 2025-09-06 00:44:53 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED
2025-09-06 00:44:53.272960 | orchestrator | 2025-09-06 00:44:53 | INFO  | Task c2362ba1-97a7-475c-842e-16de78d214e2 is in state STARTED
2025-09-06 00:44:53.273589 | orchestrator | 2025-09-06 00:44:53 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED
2025-09-06 00:44:53.274276 | orchestrator | 2025-09-06 00:44:53 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED
2025-09-06 00:44:53.278570 | orchestrator | 2025-09-06 00:44:53 | INFO  | Task 1202f4af-4807-45eb-b483-1c9d9c2259c0 is in state SUCCESS
2025-09-06 00:44:53.281252 | orchestrator |
2025-09-06 00:44:53.281598 | orchestrator |
2025-09-06 00:44:53.281618 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-09-06 00:44:53.281630 | orchestrator |
2025-09-06 00:44:53.281641 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-09-06 00:44:53.281652 | orchestrator | Saturday 06 September 2025 00:42:34 +0000 (0:00:00.191) 0:00:00.191 ****
2025-09-06 00:44:53.281663 | orchestrator | ok: [testbed-manager]
2025-09-06 00:44:53.281675 | orchestrator |
2025-09-06 00:44:53.281686 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-09-06 00:44:53.281697 | orchestrator | Saturday 06 September 2025 00:42:35 +0000 (0:00:00.807) 0:00:00.999 **** 2025-09-06 00:44:53.281708 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-06 00:44:53.281719 | orchestrator | 2025-09-06 00:44:53.281730 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-06 00:44:53.281741 | orchestrator | Saturday 06 September 2025 00:42:35 +0000 (0:00:00.552) 0:00:01.551 **** 2025-09-06 00:44:53.281752 | orchestrator | changed: [testbed-manager] 2025-09-06 00:44:53.281763 | orchestrator | 2025-09-06 00:44:53.281774 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-06 00:44:53.281785 | orchestrator | Saturday 06 September 2025 00:42:36 +0000 (0:00:01.061) 0:00:02.612 **** 2025-09-06 00:44:53.281795 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-09-06 00:44:53.281806 | orchestrator | ok: [testbed-manager] 2025-09-06 00:44:53.281817 | orchestrator | 2025-09-06 00:44:53.281828 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-06 00:44:53.281839 | orchestrator | Saturday 06 September 2025 00:43:25 +0000 (0:00:48.490) 0:00:51.103 **** 2025-09-06 00:44:53.281849 | orchestrator | changed: [testbed-manager] 2025-09-06 00:44:53.281860 | orchestrator | 2025-09-06 00:44:53.281871 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:44:53.281882 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:44:53.281894 | orchestrator | 2025-09-06 00:44:53.281905 | orchestrator | 2025-09-06 00:44:53.281916 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:44:53.281935 | orchestrator | Saturday 06 September 2025 00:43:29 +0000 (0:00:04.238) 0:00:55.341 **** 2025-09-06 00:44:53.281946 | orchestrator | =============================================================================== 2025-09-06 00:44:53.281957 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 48.49s 2025-09-06 00:44:53.281968 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.24s 2025-09-06 00:44:53.281979 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.06s 2025-09-06 00:44:53.281990 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.81s 2025-09-06 00:44:53.282001 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.55s 2025-09-06 00:44:53.282011 | orchestrator | 2025-09-06 00:44:53.282091 | orchestrator | 2025-09-06 00:44:53.282103 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-09-06 00:44:53.282114 | orchestrator | 2025-09-06 00:44:53.282125 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-06 00:44:53.282135 | orchestrator | Saturday 06 September 2025 00:42:10 +0000 (0:00:00.216) 0:00:00.216 **** 2025-09-06 00:44:53.282146 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:44:53.282159 | orchestrator | 2025-09-06 
00:44:53.282169 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-06 00:44:53.282180 | orchestrator | Saturday 06 September 2025 00:42:11 +0000 (0:00:01.220) 0:00:01.437 **** 2025-09-06 00:44:53.282190 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-06 00:44:53.282209 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-06 00:44:53.282220 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-06 00:44:53.282230 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-06 00:44:53.282241 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-06 00:44:53.282252 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-06 00:44:53.282262 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-06 00:44:53.282276 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-06 00:44:53.282295 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-06 00:44:53.282320 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-06 00:44:53.282343 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-06 00:44:53.282361 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-06 00:44:53.282377 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-06 00:44:53.282393 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-06 00:44:53.282409 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-06 00:44:53.282427 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-06 00:44:53.282539 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-06 00:44:53.282562 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-06 00:44:53.282580 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-06 00:44:53.282598 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-06 00:44:53.282617 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-06 00:44:53.282635 | orchestrator | 2025-09-06 00:44:53.282653 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-06 00:44:53.282672 | orchestrator | Saturday 06 September 2025 00:42:15 +0000 (0:00:03.720) 0:00:05.158 **** 2025-09-06 00:44:53.282684 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:44:53.282696 | orchestrator | 2025-09-06 00:44:53.282707 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-06 00:44:53.282717 | orchestrator | Saturday 06 September 
2025 00:42:16 +0000 (0:00:01.253) 0:00:06.412 **** 2025-09-06 00:44:53.282734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.282759 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.282781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.282793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.282804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.282858 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.282872 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.282883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.282904 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.282923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.282935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.282947 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.282976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.283017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.283030 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.283042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.283069 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.283081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.283092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.283103 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.283114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.283125 | orchestrator | 2025-09-06 00:44:53.283137 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-06 00:44:53.283148 | orchestrator | Saturday 06 September 2025 00:42:22 +0000 (0:00:05.397) 0:00:11.809 **** 2025-09-06 00:44:53.283194 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283208 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283219 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283254 | orchestrator | skipping: [testbed-manager] 2025-09-06 
00:44:53.283265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283311 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:44:53.283329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283341 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:44:53.283353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283484 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:44:53.283495 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:44:53.283558 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:44:53.283576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283611 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:44:53.283622 | orchestrator | 2025-09-06 00:44:53.283633 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-06 00:44:53.283644 | orchestrator | Saturday 06 September 2025 00:42:23 +0000 (0:00:01.667) 0:00:13.477 **** 2025-09-06 00:44:53.283655 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283666 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283684 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283742 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:44:53.283753 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:44:53.283765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283810 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:44:53.283829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283886 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:44:53.283897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283908 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:44:53.283919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.283966 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:44:53.283977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-06 00:44:53.283989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.284008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.284019 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:44:53.284030 | orchestrator | 2025-09-06 00:44:53.284041 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-06 00:44:53.284050 | orchestrator | Saturday 06 September 2025 00:42:26 +0000 (0:00:02.785) 0:00:16.262 **** 2025-09-06 00:44:53.284060 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:44:53.284069 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:44:53.284079 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:44:53.284088 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:44:53.284098 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:44:53.284107 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:44:53.284117 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:44:53.284126 | orchestrator | 2025-09-06 00:44:53.284136 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-06 00:44:53.284146 | orchestrator | Saturday 06 September 2025 00:42:27 +0000 (0:00:01.047) 0:00:17.310 **** 2025-09-06 00:44:53.284155 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:44:53.284165 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:44:53.284174 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:44:53.284184 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:44:53.284193 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:44:53.284203 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:44:53.284212 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:44:53.284221 | orchestrator | 2025-09-06 00:44:53.284231 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-06 00:44:53.284241 | orchestrator | Saturday 06 September 2025 00:42:29 +0000 (0:00:01.368) 0:00:18.679 **** 2025-09-06 00:44:53.284250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.284267 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.284286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.284297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.284307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.284317 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.284327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.284337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284368 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284395 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284409 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284419 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284457 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.284536 | orchestrator | 2025-09-06 00:44:53.284546 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-06 00:44:53.284555 | orchestrator | Saturday 06 September 2025 00:42:34 +0000 (0:00:05.451) 0:00:24.130 **** 2025-09-06 00:44:53.284565 | orchestrator | [WARNING]: Skipped 2025-09-06 00:44:53.284576 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-06 00:44:53.284592 | orchestrator | to this access issue: 2025-09-06 00:44:53.284609 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-06 00:44:53.284633 | orchestrator | directory 2025-09-06 00:44:53.284652 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 00:44:53.284667 | orchestrator | 2025-09-06 00:44:53.284682 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-06 00:44:53.284698 | orchestrator | Saturday 06 September 2025 00:42:35 +0000 (0:00:01.238) 0:00:25.368 **** 2025-09-06 00:44:53.284713 | orchestrator | [WARNING]: Skipped 2025-09-06 00:44:53.284730 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-06 00:44:53.284757 | orchestrator | to this access issue: 2025-09-06 00:44:53.284773 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-06 00:44:53.284790 | orchestrator | directory 2025-09-06 00:44:53.284800 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 00:44:53.284810 | orchestrator | 2025-09-06 00:44:53.284819 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-06 00:44:53.284829 | orchestrator | Saturday 06 September 2025 00:42:36 +0000 (0:00:00.963) 0:00:26.331 **** 2025-09-06 00:44:53.284839 | orchestrator | [WARNING]: Skipped 2025-09-06 00:44:53.284848 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-06 00:44:53.284858 | orchestrator | to this access issue: 2025-09-06 00:44:53.284867 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-06 00:44:53.284877 | orchestrator | directory 2025-09-06 00:44:53.284886 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 00:44:53.284896 | orchestrator | 2025-09-06 00:44:53.284905 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-06 00:44:53.284915 | orchestrator | Saturday 06 September 2025 00:42:37 +0000 (0:00:01.051) 0:00:27.382 **** 2025-09-06 00:44:53.284924 | orchestrator | [WARNING]: Skipped 2025-09-06 00:44:53.284934 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-06 00:44:53.284943 | orchestrator | to this access issue: 2025-09-06 00:44:53.284952 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-06 00:44:53.284961 | orchestrator | directory 2025-09-06 00:44:53.284971 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 00:44:53.284980 | orchestrator | 2025-09-06 00:44:53.284990 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-06 00:44:53.284999 | orchestrator | Saturday 06 September 2025 00:42:38 +0000 (0:00:00.698) 0:00:28.081 **** 
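The four "Find custom fluentd ... config files" tasks above each emit a [WARNING] yet still end in ok, because the overlay directories under /opt/configuration/environments/kolla/files/overlays/fluentd simply do not exist in this testbed configuration, so the find returns no files rather than failing. A minimal Python sketch of that behaviour, assuming only the overlay paths visible in the warnings (the helper name is hypothetical, not kolla-ansible code):

from pathlib import Path

OVERLAY_ROOT = Path("/opt/configuration/environments/kolla/files/overlays/fluentd")

def find_custom_configs(section):
    """Return custom fluentd config files for one overlay section (input/filter/format/output)."""
    overlay = OVERLAY_ROOT / section
    if not overlay.is_dir():
        # Mirrors the Ansible [WARNING] above: a missing overlay path is skipped, not an error.
        print(f"Skipped '{overlay}' path: not a directory")
        return []
    return sorted(str(p) for p in overlay.glob("*.conf"))

for section in ("input", "filter", "format", "output"):
    find_custom_configs(section)

With no overlays present, every section yields an empty list and the deployment falls back to the default fluentd.conf copied in the next task.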
2025-09-06 00:44:53.285008 | orchestrator | changed: [testbed-manager] 2025-09-06 00:44:53.285018 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:44:53.285027 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:44:53.285037 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:44:53.285046 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:44:53.285055 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:44:53.285065 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:44:53.285074 | orchestrator | 2025-09-06 00:44:53.285084 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-06 00:44:53.285093 | orchestrator | Saturday 06 September 2025 00:42:41 +0000 (0:00:02.707) 0:00:30.789 **** 2025-09-06 00:44:53.285103 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-06 00:44:53.285112 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-06 00:44:53.285122 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-06 00:44:53.285139 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-06 00:44:53.285149 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-06 00:44:53.285159 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-06 00:44:53.285168 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-06 00:44:53.285178 | orchestrator | 2025-09-06 00:44:53.285187 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-06 00:44:53.285197 | orchestrator | Saturday 06 September 2025 00:42:43 +0000 (0:00:02.548) 0:00:33.337 **** 2025-09-06 00:44:53.285206 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:44:53.285222 | orchestrator | changed: [testbed-manager] 2025-09-06 00:44:53.285232 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:44:53.285241 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:44:53.285251 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:44:53.285260 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:44:53.285269 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:44:53.285279 | orchestrator | 2025-09-06 00:44:53.285288 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-06 00:44:53.285298 | orchestrator | Saturday 06 September 2025 00:42:46 +0000 (0:00:02.904) 0:00:36.242 **** 2025-09-06 00:44:53.285308 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.285334 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.285354 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.285390 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285400 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.285414 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285429 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.285449 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285459 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.285491 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285519 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:44:53.285545 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285555 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285565 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285575 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285585 | orchestrator | 2025-09-06 00:44:53.285595 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-06 00:44:53.285604 | orchestrator | Saturday 06 September 2025 00:42:48 +0000 (0:00:02.320) 0:00:38.563 **** 2025-09-06 00:44:53.285620 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-06 00:44:53.285630 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-06 00:44:53.285639 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-06 00:44:53.285657 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-06 00:44:53.285667 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-06 00:44:53.285677 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-06 00:44:53.285686 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-06 00:44:53.285695 | orchestrator | 2025-09-06 00:44:53.285705 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-06 00:44:53.285715 | orchestrator | Saturday 06 September 2025 00:42:51 +0000 (0:00:02.395) 0:00:40.959 **** 2025-09-06 00:44:53.285724 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-06 00:44:53.285734 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-06 00:44:53.285743 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-06 00:44:53.285753 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-06 00:44:53.285762 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-06 00:44:53.285772 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-06 00:44:53.285781 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-06 00:44:53.285790 | orchestrator | 2025-09-06 00:44:53.285800 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-06 00:44:53.285809 | orchestrator | Saturday 06 September 2025 00:42:54 +0000 (0:00:02.795) 0:00:43.755 **** 2025-09-06 00:44:53.285826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285847 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-06 00:44:53.285934 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.285996 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.286010 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.286052 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.286063 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.286078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.286089 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.286099 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:44:53.286108 | orchestrator | 2025-09-06 00:44:53.286124 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-06 00:44:53.286134 | orchestrator | Saturday 06 September 2025 00:42:57 +0000 (0:00:03.680) 0:00:47.436 **** 2025-09-06 00:44:53.286143 | orchestrator | changed: [testbed-manager] 2025-09-06 00:44:53.286153 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:44:53.286163 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:44:53.286172 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:44:53.286182 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:44:53.286191 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:44:53.286201 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:44:53.286210 | orchestrator | 2025-09-06 00:44:53.286220 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-06 00:44:53.286229 | orchestrator | Saturday 06 September 2025 00:42:59 +0000 (0:00:01.816) 0:00:49.253 **** 2025-09-06 00:44:53.286239 | orchestrator | changed: [testbed-manager] 2025-09-06 00:44:53.286248 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:44:53.286257 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:44:53.286267 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:44:53.286276 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:44:53.286286 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:44:53.286295 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:44:53.286304 | orchestrator | 2025-09-06 00:44:53.286314 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-06 00:44:53.286323 | orchestrator | Saturday 06 September 2025 00:43:01 +0000 (0:00:01.705) 0:00:50.959 **** 2025-09-06 00:44:53.286333 | orchestrator | 2025-09-06 00:44:53.286342 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-06 00:44:53.286352 | orchestrator | Saturday 06 September 2025 00:43:01 +0000 (0:00:00.061) 0:00:51.021 **** 2025-09-06 00:44:53.286362 | orchestrator | 2025-09-06 00:44:53.286371 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-06 00:44:53.286381 | orchestrator | Saturday 06 September 2025 00:43:01 +0000 (0:00:00.054) 0:00:51.075 **** 2025-09-06 00:44:53.286390 | orchestrator | 2025-09-06 00:44:53.286400 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-06 00:44:53.286409 | orchestrator | Saturday 06 September 2025 00:43:01 +0000 (0:00:00.059) 0:00:51.135 **** 2025-09-06 00:44:53.286418 | orchestrator | 2025-09-06 00:44:53.286428 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-06 00:44:53.286437 | orchestrator | Saturday 06 September 2025 00:43:01 +0000 (0:00:00.229) 0:00:51.364 **** 2025-09-06 00:44:53.286452 | orchestrator | 2025-09-06 00:44:53.286467 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-06 00:44:53.286477 | orchestrator | Saturday 06 September 2025 00:43:01 +0000 (0:00:00.049) 0:00:51.413 **** 2025-09-06 00:44:53.286486 | orchestrator | 
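The run of "Flush handlers" entries above, followed below by a single "Restart fluentd/kolla-toolbox/cron container" handler per service, reflects Ansible's notify/flush-handlers mechanism: each config task that reports changed queues the matching restart handler, duplicate notifications collapse, and the restarts run once when the handlers are flushed. A minimal plain-Python sketch of that pattern (not Ansible internals; the task and handler names are copied from the log purely for illustration):

notified = []  # queued handler names, in order, de-duplicated

def notify(handler):
    if handler not in notified:
        notified.append(handler)

def run_task(name, changed, handlers=()):
    print(f"TASK [{name}] -> {'changed' if changed else 'ok'}")
    if changed:
        for handler in handlers:
            notify(handler)

def flush_handlers():
    while notified:
        print(f"RUNNING HANDLER [{notified.pop(0)}]")

run_task("common : Copying over fluentd.conf", True, ["Restart fluentd container"])
run_task("common : Copying over cron logrotate config file", True, ["Restart cron container"])
run_task("common : Copy rabbitmq-env.conf to kolla toolbox", True, ["Restart kolla-toolbox container"])
run_task("common : Copy rabbitmq erl_inetrc to kolla toolbox", True, ["Restart kolla-toolbox container"])
flush_handlers()

Both kolla-toolbox copies notify the same handler, but it restarts only once, which is why the container restarts dominate the task recap timings (56s and 40s) instead of repeating per config file.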
2025-09-06 00:44:53.286496 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-06 00:44:53.286555 | orchestrator | Saturday 06 September 2025 00:43:01 +0000 (0:00:00.050) 0:00:51.464 **** 2025-09-06 00:44:53.286565 | orchestrator | 2025-09-06 00:44:53.286575 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-06 00:44:53.286585 | orchestrator | Saturday 06 September 2025 00:43:01 +0000 (0:00:00.070) 0:00:51.534 **** 2025-09-06 00:44:53.286594 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:44:53.286604 | orchestrator | changed: [testbed-manager] 2025-09-06 00:44:53.286614 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:44:53.286623 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:44:53.286633 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:44:53.286643 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:44:53.286652 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:44:53.286662 | orchestrator | 2025-09-06 00:44:53.286671 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-06 00:44:53.286681 | orchestrator | Saturday 06 September 2025 00:43:42 +0000 (0:00:40.822) 0:01:32.357 **** 2025-09-06 00:44:53.286691 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:44:53.286700 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:44:53.286710 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:44:53.286719 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:44:53.286729 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:44:53.286738 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:44:53.286748 | orchestrator | changed: [testbed-manager] 2025-09-06 00:44:53.286757 | orchestrator | 2025-09-06 00:44:53.286767 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-06 00:44:53.286777 | orchestrator | Saturday 06 September 2025 00:44:38 +0000 (0:00:56.128) 0:02:28.485 **** 2025-09-06 00:44:53.286786 | orchestrator | ok: [testbed-manager] 2025-09-06 00:44:53.286796 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:44:53.286805 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:44:53.286815 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:44:53.286824 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:44:53.286834 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:44:53.286843 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:44:53.286853 | orchestrator | 2025-09-06 00:44:53.286863 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-06 00:44:53.286872 | orchestrator | Saturday 06 September 2025 00:44:40 +0000 (0:00:01.986) 0:02:30.472 **** 2025-09-06 00:44:53.286882 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:44:53.286892 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:44:53.286901 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:44:53.286911 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:44:53.286920 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:44:53.286929 | orchestrator | changed: [testbed-manager] 2025-09-06 00:44:53.286939 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:44:53.286948 | orchestrator | 2025-09-06 00:44:53.286958 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:44:53.286969 | orchestrator | testbed-manager : ok=22  
changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-06 00:44:53.286979 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-06 00:44:53.286994 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-06 00:44:53.287004 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-06 00:44:53.287021 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-06 00:44:53.287030 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-06 00:44:53.287040 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-06 00:44:53.287050 | orchestrator | 2025-09-06 00:44:53.287060 | orchestrator | 2025-09-06 00:44:53.287069 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:44:53.287077 | orchestrator | Saturday 06 September 2025 00:44:49 +0000 (0:00:09.112) 0:02:39.585 **** 2025-09-06 00:44:53.287085 | orchestrator | =============================================================================== 2025-09-06 00:44:53.287093 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 56.13s 2025-09-06 00:44:53.287101 | orchestrator | common : Restart fluentd container ------------------------------------- 40.82s 2025-09-06 00:44:53.287109 | orchestrator | common : Restart cron container ----------------------------------------- 9.11s 2025-09-06 00:44:53.287117 | orchestrator | common : Copying over config.json files for services -------------------- 5.45s 2025-09-06 00:44:53.287125 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.40s 2025-09-06 00:44:53.287132 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.72s 2025-09-06 00:44:53.287140 | orchestrator | common : Check common containers ---------------------------------------- 3.68s 2025-09-06 00:44:53.287152 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.90s 2025-09-06 00:44:53.287160 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.80s 2025-09-06 00:44:53.287168 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.79s 2025-09-06 00:44:53.287176 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.71s 2025-09-06 00:44:53.287183 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.55s 2025-09-06 00:44:53.287191 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.40s 2025-09-06 00:44:53.287199 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.32s 2025-09-06 00:44:53.287207 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.99s 2025-09-06 00:44:53.287215 | orchestrator | common : Creating log volume -------------------------------------------- 1.82s 2025-09-06 00:44:53.287222 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.71s 2025-09-06 00:44:53.287230 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate 
--- 1.67s 2025-09-06 00:44:53.287238 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.37s 2025-09-06 00:44:53.287246 | orchestrator | common : include_tasks -------------------------------------------------- 1.25s 2025-09-06 00:44:53.287254 | orchestrator | 2025-09-06 00:44:53 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:44:56.310195 | orchestrator | 2025-09-06 00:44:56 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state STARTED 2025-09-06 00:44:56.310293 | orchestrator | 2025-09-06 00:44:56 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:44:56.310714 | orchestrator | 2025-09-06 00:44:56 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:44:56.312034 | orchestrator | 2025-09-06 00:44:56 | INFO  | Task c2362ba1-97a7-475c-842e-16de78d214e2 is in state STARTED 2025-09-06 00:44:56.312065 | orchestrator | 2025-09-06 00:44:56 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:44:56.312691 | orchestrator | 2025-09-06 00:44:56 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:44:56.312716 | orchestrator | 2025-09-06 00:44:56 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:44:59.499397 | orchestrator | 2025-09-06 00:44:59 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state STARTED 2025-09-06 00:44:59.500021 | orchestrator | 2025-09-06 00:44:59 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:44:59.502459 | orchestrator | 2025-09-06 00:44:59 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:44:59.508006 | orchestrator | 2025-09-06 00:44:59 | INFO  | Task c2362ba1-97a7-475c-842e-16de78d214e2 is in state STARTED 2025-09-06 00:44:59.508038 | orchestrator | 2025-09-06 00:44:59 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:44:59.509778 | orchestrator | 2025-09-06 00:44:59 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:44:59.509889 | orchestrator | 2025-09-06 00:44:59 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:02.548904 | orchestrator | 2025-09-06 00:45:02 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state STARTED 2025-09-06 00:45:02.549594 | orchestrator | 2025-09-06 00:45:02 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:02.550998 | orchestrator | 2025-09-06 00:45:02 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:02.552137 | orchestrator | 2025-09-06 00:45:02 | INFO  | Task c2362ba1-97a7-475c-842e-16de78d214e2 is in state STARTED 2025-09-06 00:45:02.552782 | orchestrator | 2025-09-06 00:45:02 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:02.554196 | orchestrator | 2025-09-06 00:45:02 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:02.554239 | orchestrator | 2025-09-06 00:45:02 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:05.592682 | orchestrator | 2025-09-06 00:45:05 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state STARTED 2025-09-06 00:45:05.593571 | orchestrator | 2025-09-06 00:45:05 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:05.594312 | orchestrator | 2025-09-06 00:45:05 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state 
STARTED 2025-09-06 00:45:05.595048 | orchestrator | 2025-09-06 00:45:05 | INFO  | Task c2362ba1-97a7-475c-842e-16de78d214e2 is in state SUCCESS 2025-09-06 00:45:05.596157 | orchestrator | 2025-09-06 00:45:05 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:05.596916 | orchestrator | 2025-09-06 00:45:05 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:05.597005 | orchestrator | 2025-09-06 00:45:05 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:08.634410 | orchestrator | 2025-09-06 00:45:08 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state STARTED 2025-09-06 00:45:08.634530 | orchestrator | 2025-09-06 00:45:08 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:08.634547 | orchestrator | 2025-09-06 00:45:08 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:08.634560 | orchestrator | 2025-09-06 00:45:08 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:08.635081 | orchestrator | 2025-09-06 00:45:08 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:08.635812 | orchestrator | 2025-09-06 00:45:08 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:08.635833 | orchestrator | 2025-09-06 00:45:08 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:11.661433 | orchestrator | 2025-09-06 00:45:11 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state STARTED 2025-09-06 00:45:11.662306 | orchestrator | 2025-09-06 00:45:11 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:11.663315 | orchestrator | 2025-09-06 00:45:11 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:11.666546 | orchestrator | 2025-09-06 00:45:11 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:11.667171 | orchestrator | 2025-09-06 00:45:11 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:11.667856 | orchestrator | 2025-09-06 00:45:11 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:11.668082 | orchestrator | 2025-09-06 00:45:11 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:14.689867 | orchestrator | 2025-09-06 00:45:14 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state STARTED 2025-09-06 00:45:14.690356 | orchestrator | 2025-09-06 00:45:14 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:14.695607 | orchestrator | 2025-09-06 00:45:14 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:14.696071 | orchestrator | 2025-09-06 00:45:14 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:14.696754 | orchestrator | 2025-09-06 00:45:14 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:14.697175 | orchestrator | 2025-09-06 00:45:14 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:14.697311 | orchestrator | 2025-09-06 00:45:14 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:17.745433 | orchestrator | 2025-09-06 00:45:17 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state STARTED 2025-09-06 00:45:17.745560 | orchestrator | 2025-09-06 00:45:17 | INFO  | Task 
d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:17.745862 | orchestrator | 2025-09-06 00:45:17 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:17.746323 | orchestrator | 2025-09-06 00:45:17 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:17.746430 | orchestrator | 2025-09-06 00:45:17 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:17.747668 | orchestrator | 2025-09-06 00:45:17 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:17.747738 | orchestrator | 2025-09-06 00:45:17 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:20.775772 | orchestrator | 2025-09-06 00:45:20 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state STARTED 2025-09-06 00:45:20.778690 | orchestrator | 2025-09-06 00:45:20 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:20.781084 | orchestrator | 2025-09-06 00:45:20 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:20.783278 | orchestrator | 2025-09-06 00:45:20 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:20.785776 | orchestrator | 2025-09-06 00:45:20 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:20.786868 | orchestrator | 2025-09-06 00:45:20 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:20.787209 | orchestrator | 2025-09-06 00:45:20 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:23.816140 | orchestrator | 2025-09-06 00:45:23 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state STARTED 2025-09-06 00:45:23.816228 | orchestrator | 2025-09-06 00:45:23 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:23.816705 | orchestrator | 2025-09-06 00:45:23 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:23.818483 | orchestrator | 2025-09-06 00:45:23 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:23.820533 | orchestrator | 2025-09-06 00:45:23 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:23.821243 | orchestrator | 2025-09-06 00:45:23 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:23.821276 | orchestrator | 2025-09-06 00:45:23 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:27.219076 | orchestrator | 2025-09-06 00:45:27.219160 | orchestrator | 2025-09-06 00:45:27.219176 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:45:27.219188 | orchestrator | 2025-09-06 00:45:27.219199 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:45:27.219210 | orchestrator | Saturday 06 September 2025 00:44:54 +0000 (0:00:00.216) 0:00:00.216 **** 2025-09-06 00:45:27.219221 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:27.219232 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:27.219242 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:27.219253 | orchestrator | 2025-09-06 00:45:27.219264 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:45:27.219274 | orchestrator | Saturday 06 September 2025 00:44:54 +0000 (0:00:00.261) 0:00:00.478 **** 
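The "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines above show the deploy wrapper polling the manager until every submitted task reports SUCCESS before the next play appears in the console. A minimal sketch of that polling loop, with hypothetical task IDs and a stand-in state source (the real client API is not shown in the log):

import itertools
import time

# Hypothetical stand-in for the real state lookup: each task stays STARTED for a while,
# then reports SUCCESS forever after.
_states = {
    "task-a": itertools.chain(["STARTED"] * 2, itertools.repeat("SUCCESS")),
    "task-b": itertools.chain(["STARTED"] * 4, itertools.repeat("SUCCESS")),
}

def get_state(task_id):
    return next(_states[task_id])

def wait_for_tasks(task_ids, interval=1.0):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["task-a", "task-b"])

Tasks finish independently (one reaches SUCCESS above while the others are still STARTED), so the console keeps reporting the remaining IDs each cycle until the set is empty.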
2025-09-06 00:45:27.219285 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-06 00:45:27.219295 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-06 00:45:27.219306 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-06 00:45:27.219316 | orchestrator | 2025-09-06 00:45:27.219327 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-06 00:45:27.219337 | orchestrator | 2025-09-06 00:45:27.219348 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-06 00:45:27.219358 | orchestrator | Saturday 06 September 2025 00:44:55 +0000 (0:00:00.434) 0:00:00.912 **** 2025-09-06 00:45:27.219370 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:45:27.219381 | orchestrator | 2025-09-06 00:45:27.219393 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-06 00:45:27.219412 | orchestrator | Saturday 06 September 2025 00:44:55 +0000 (0:00:00.598) 0:00:01.511 **** 2025-09-06 00:45:27.219431 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-06 00:45:27.219497 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-06 00:45:27.219514 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-06 00:45:27.219525 | orchestrator | 2025-09-06 00:45:27.219535 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-06 00:45:27.219546 | orchestrator | Saturday 06 September 2025 00:44:56 +0000 (0:00:00.789) 0:00:02.301 **** 2025-09-06 00:45:27.219557 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-06 00:45:27.219568 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-06 00:45:27.219578 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-06 00:45:27.219589 | orchestrator | 2025-09-06 00:45:27.219600 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-06 00:45:27.219633 | orchestrator | Saturday 06 September 2025 00:44:58 +0000 (0:00:02.042) 0:00:04.343 **** 2025-09-06 00:45:27.219646 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:27.219659 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:27.219672 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:27.219684 | orchestrator | 2025-09-06 00:45:27.219697 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-06 00:45:27.219710 | orchestrator | Saturday 06 September 2025 00:45:01 +0000 (0:00:02.499) 0:00:06.843 **** 2025-09-06 00:45:27.219722 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:27.219735 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:27.219747 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:27.219760 | orchestrator | 2025-09-06 00:45:27.219772 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:45:27.219785 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:45:27.219799 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:45:27.219811 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 
ignored=0 2025-09-06 00:45:27.219823 | orchestrator | 2025-09-06 00:45:27.219836 | orchestrator | 2025-09-06 00:45:27.219848 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:45:27.219860 | orchestrator | Saturday 06 September 2025 00:45:04 +0000 (0:00:03.587) 0:00:10.430 **** 2025-09-06 00:45:27.219872 | orchestrator | =============================================================================== 2025-09-06 00:45:27.219898 | orchestrator | memcached : Restart memcached container --------------------------------- 3.59s 2025-09-06 00:45:27.219911 | orchestrator | memcached : Check memcached container ----------------------------------- 2.50s 2025-09-06 00:45:27.219923 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.04s 2025-09-06 00:45:27.219935 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.79s 2025-09-06 00:45:27.219948 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.60s 2025-09-06 00:45:27.219961 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-09-06 00:45:27.219973 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-09-06 00:45:27.219983 | orchestrator | 2025-09-06 00:45:27.219993 | orchestrator | 2025-09-06 00:45:27.220004 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:45:27.220015 | orchestrator | 2025-09-06 00:45:27.220025 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:45:27.220036 | orchestrator | Saturday 06 September 2025 00:44:54 +0000 (0:00:00.279) 0:00:00.279 **** 2025-09-06 00:45:27.220047 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:27.220058 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:27.220069 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:27.220079 | orchestrator | 2025-09-06 00:45:27.220090 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:45:27.220117 | orchestrator | Saturday 06 September 2025 00:44:55 +0000 (0:00:00.421) 0:00:00.701 **** 2025-09-06 00:45:27.220129 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-06 00:45:27.220139 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-06 00:45:27.220150 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-06 00:45:27.220160 | orchestrator | 2025-09-06 00:45:27.220171 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-06 00:45:27.220182 | orchestrator | 2025-09-06 00:45:27.220192 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-06 00:45:27.220203 | orchestrator | Saturday 06 September 2025 00:44:55 +0000 (0:00:00.580) 0:00:01.282 **** 2025-09-06 00:45:27.220222 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:45:27.220233 | orchestrator | 2025-09-06 00:45:27.220243 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-06 00:45:27.220254 | orchestrator | Saturday 06 September 2025 00:44:56 +0000 (0:00:00.615) 0:00:01.897 **** 2025-09-06 00:45:27.220267 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220364 | orchestrator | 2025-09-06 00:45:27.220375 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-06 00:45:27.220386 | orchestrator | Saturday 06 September 2025 00:44:57 +0000 (0:00:01.359) 0:00:03.257 **** 2025-09-06 00:45:27.220398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220558 | orchestrator | 2025-09-06 00:45:27.220571 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-06 00:45:27.220582 | orchestrator | Saturday 06 September 2025 00:45:01 +0000 (0:00:03.159) 0:00:06.417 **** 2025-09-06 00:45:27.220593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220673 | orchestrator | 2025-09-06 00:45:27.220690 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-06 00:45:27.220702 | orchestrator | Saturday 06 September 2025 00:45:03 +0000 (0:00:02.703) 0:00:09.120 **** 2025-09-06 00:45:27.220713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-06 00:45:27.220792 | orchestrator | 2025-09-06 00:45:27.220803 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-06 00:45:27.220814 | orchestrator | Saturday 06 September 2025 00:45:05 +0000 (0:00:01.626) 0:00:10.746 **** 2025-09-06 00:45:27.220825 | orchestrator | 2025-09-06 00:45:27.220837 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-06 00:45:27.220853 | orchestrator | Saturday 06 September 2025 00:45:05 +0000 (0:00:00.093) 0:00:10.840 **** 2025-09-06 00:45:27.220864 | orchestrator | 2025-09-06 00:45:27.220875 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-06 00:45:27.220886 | orchestrator | Saturday 06 September 2025 00:45:05 +0000 (0:00:00.117) 0:00:10.957 **** 2025-09-06 00:45:27.220896 | orchestrator | 2025-09-06 00:45:27.220907 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-06 00:45:27.220917 | orchestrator | Saturday 06 September 2025 00:45:05 +0000 (0:00:00.068) 0:00:11.026 **** 2025-09-06 00:45:27.220928 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:27.220939 
| orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:27.220949 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:27.220960 | orchestrator | 2025-09-06 00:45:27.220971 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-06 00:45:27.220982 | orchestrator | Saturday 06 September 2025 00:45:14 +0000 (0:00:08.482) 0:00:19.508 **** 2025-09-06 00:45:27.220992 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:27.221003 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:27.221014 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:27.221024 | orchestrator | 2025-09-06 00:45:27.221035 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:45:27.221046 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:45:27.221058 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:45:27.221069 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:45:27.221079 | orchestrator | 2025-09-06 00:45:27.221090 | orchestrator | 2025-09-06 00:45:27.221101 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:45:27.221111 | orchestrator | Saturday 06 September 2025 00:45:24 +0000 (0:00:10.135) 0:00:29.644 **** 2025-09-06 00:45:27.221122 | orchestrator | =============================================================================== 2025-09-06 00:45:27.221133 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.14s 2025-09-06 00:45:27.221143 | orchestrator | redis : Restart redis container ----------------------------------------- 8.48s 2025-09-06 00:45:27.221154 | orchestrator | redis : Copying over default config.json files -------------------------- 3.16s 2025-09-06 00:45:27.221165 | orchestrator | redis : Copying over redis config files --------------------------------- 2.70s 2025-09-06 00:45:27.221176 | orchestrator | redis : Check redis containers ------------------------------------------ 1.63s 2025-09-06 00:45:27.221186 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.36s 2025-09-06 00:45:27.221197 | orchestrator | redis : include_tasks --------------------------------------------------- 0.62s 2025-09-06 00:45:27.221208 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-09-06 00:45:27.221218 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2025-09-06 00:45:27.221235 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.28s 2025-09-06 00:45:27.221246 | orchestrator | 2025-09-06 00:45:27 | INFO  | Task f1c19c5f-8e91-4c3c-9b39-03d1484374ac is in state SUCCESS 2025-09-06 00:45:27.221257 | orchestrator | 2025-09-06 00:45:27 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:27.221268 | orchestrator | 2025-09-06 00:45:27 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:27.221279 | orchestrator | 2025-09-06 00:45:27 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:27.221289 | orchestrator | 2025-09-06 00:45:27 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 
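The interleaved `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` lines come from the deployment driver polling the state of the queued OSISM tasks every few seconds until each one reports SUCCESS (as the redis task f1c19c5f just did above). Below is a minimal Python sketch of such a polling loop; `get_task_state` and the shortened task IDs are hypothetical stand-ins for illustration, not the actual OSISM client API.

```python
import time
from typing import Callable, Iterable


def wait_for_tasks(
    task_ids: Iterable[str],
    get_task_state: Callable[[str], str],
    interval: float = 1.0,
) -> None:
    """Poll each task until it leaves the STARTED state, logging like the job output above."""
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_task_state(task_id)  # e.g. "STARTED", "SUCCESS", "FAILURE"
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
        pending = still_running
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)


# Example run with a fake state lookup that resolves every task immediately.
if __name__ == "__main__":
    wait_for_tasks(["f1c19c5f", "d13e0df4"], lambda task_id: "SUCCESS")
```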
2025-09-06 00:45:27.221300 | orchestrator | 2025-09-06 00:45:27 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:27.221311 | orchestrator | 2025-09-06 00:45:27 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:30.276388 | orchestrator | 2025-09-06 00:45:30 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:30.510538 | orchestrator | 2025-09-06 00:45:30 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:30.562552 | orchestrator | 2025-09-06 00:45:30 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:30.568642 | orchestrator | 2025-09-06 00:45:30 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:30.625074 | orchestrator | 2025-09-06 00:45:30 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:30.625130 | orchestrator | 2025-09-06 00:45:30 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:33.807500 | orchestrator | 2025-09-06 00:45:33 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:33.807581 | orchestrator | 2025-09-06 00:45:33 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:33.807596 | orchestrator | 2025-09-06 00:45:33 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:33.807608 | orchestrator | 2025-09-06 00:45:33 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:33.807619 | orchestrator | 2025-09-06 00:45:33 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:33.807630 | orchestrator | 2025-09-06 00:45:33 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:36.829548 | orchestrator | 2025-09-06 00:45:36 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state STARTED 2025-09-06 00:45:36.829635 | orchestrator | 2025-09-06 00:45:36 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:36.829649 | orchestrator | 2025-09-06 00:45:36 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:36.829680 | orchestrator | 2025-09-06 00:45:36 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:36.829692 | orchestrator | 2025-09-06 00:45:36 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:36.829703 | orchestrator | 2025-09-06 00:45:36 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:40.061411 | orchestrator | 2025-09-06 00:45:40.061577 | orchestrator | 2025-09-06 00:45:40 | INFO  | Task d13e0df4-f779-48f7-9bef-9ce553fc96da is in state SUCCESS 2025-09-06 00:45:40.062475 | orchestrator | 2025-09-06 00:45:40.062514 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-06 00:45:40.062527 | orchestrator | 2025-09-06 00:45:40.062708 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-06 00:45:40.062726 | orchestrator | Saturday 06 September 2025 00:42:11 +0000 (0:00:00.172) 0:00:00.172 **** 2025-09-06 00:45:40.062738 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:45:40.062750 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:45:40.062761 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:45:40.062771 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.062782 | 
orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.062793 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.062803 | orchestrator | 2025-09-06 00:45:40.062815 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-06 00:45:40.062826 | orchestrator | Saturday 06 September 2025 00:42:11 +0000 (0:00:00.737) 0:00:00.910 **** 2025-09-06 00:45:40.062836 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:45:40.062848 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:45:40.062863 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:45:40.062875 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.062886 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.062896 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.062907 | orchestrator | 2025-09-06 00:45:40.062918 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-06 00:45:40.062929 | orchestrator | Saturday 06 September 2025 00:42:12 +0000 (0:00:00.512) 0:00:01.422 **** 2025-09-06 00:45:40.062940 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:45:40.062951 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:45:40.062962 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:45:40.062972 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.062983 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.062994 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.063005 | orchestrator | 2025-09-06 00:45:40.063015 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-06 00:45:40.063027 | orchestrator | Saturday 06 September 2025 00:42:12 +0000 (0:00:00.585) 0:00:02.008 **** 2025-09-06 00:45:40.063037 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:45:40.063048 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:45:40.063059 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.063069 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:45:40.063080 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.063091 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.063102 | orchestrator | 2025-09-06 00:45:40.063113 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-06 00:45:40.063124 | orchestrator | Saturday 06 September 2025 00:42:14 +0000 (0:00:02.133) 0:00:04.142 **** 2025-09-06 00:45:40.063147 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:45:40.063158 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:45:40.063169 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:45:40.063179 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.063190 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.063200 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.063211 | orchestrator | 2025-09-06 00:45:40.063222 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-06 00:45:40.063233 | orchestrator | Saturday 06 September 2025 00:42:16 +0000 (0:00:01.020) 0:00:05.162 **** 2025-09-06 00:45:40.063244 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:45:40.063255 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:45:40.063265 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:45:40.063276 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.063287 | orchestrator | 
changed: [testbed-node-1] 2025-09-06 00:45:40.063297 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.063308 | orchestrator | 2025-09-06 00:45:40.063319 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-06 00:45:40.063332 | orchestrator | Saturday 06 September 2025 00:42:17 +0000 (0:00:01.250) 0:00:06.413 **** 2025-09-06 00:45:40.063354 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:45:40.063369 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:45:40.063381 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:45:40.063395 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.063408 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.063420 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.063452 | orchestrator | 2025-09-06 00:45:40.063465 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-06 00:45:40.063478 | orchestrator | Saturday 06 September 2025 00:42:18 +0000 (0:00:00.767) 0:00:07.181 **** 2025-09-06 00:45:40.063491 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:45:40.063503 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:45:40.063516 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:45:40.063528 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.063541 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.063553 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.063567 | orchestrator | 2025-09-06 00:45:40.063579 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-06 00:45:40.063593 | orchestrator | Saturday 06 September 2025 00:42:19 +0000 (0:00:01.028) 0:00:08.210 **** 2025-09-06 00:45:40.063606 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-06 00:45:40.063619 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-06 00:45:40.063646 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:45:40.063660 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-06 00:45:40.063673 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-06 00:45:40.063696 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:45:40.063709 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-06 00:45:40.063721 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-06 00:45:40.063732 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:45:40.063743 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-06 00:45:40.063764 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-06 00:45:40.063775 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.063786 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-06 00:45:40.063797 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-06 00:45:40.063808 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.063819 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-06 00:45:40.063958 | orchestrator | skipping: [testbed-node-2] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2025-09-06 00:45:40.063974 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.063985 | orchestrator | 2025-09-06 00:45:40.063996 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-06 00:45:40.064007 | orchestrator | Saturday 06 September 2025 00:42:20 +0000 (0:00:01.249) 0:00:09.459 **** 2025-09-06 00:45:40.064018 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:45:40.064029 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:45:40.064040 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:45:40.064051 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.064061 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.064072 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.064083 | orchestrator | 2025-09-06 00:45:40.064094 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-06 00:45:40.064106 | orchestrator | Saturday 06 September 2025 00:42:21 +0000 (0:00:01.261) 0:00:10.721 **** 2025-09-06 00:45:40.064117 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:45:40.064128 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:45:40.064148 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:45:40.064159 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.064169 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.064180 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.064191 | orchestrator | 2025-09-06 00:45:40.064202 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-06 00:45:40.064213 | orchestrator | Saturday 06 September 2025 00:42:22 +0000 (0:00:01.035) 0:00:11.756 **** 2025-09-06 00:45:40.064224 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:45:40.064234 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:45:40.064245 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:45:40.064256 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.064280 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.064291 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.064302 | orchestrator | 2025-09-06 00:45:40.064313 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-06 00:45:40.064330 | orchestrator | Saturday 06 September 2025 00:42:28 +0000 (0:00:05.973) 0:00:17.730 **** 2025-09-06 00:45:40.064341 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:45:40.064351 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:45:40.064454 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:45:40.064472 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.064483 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.064494 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.064504 | orchestrator | 2025-09-06 00:45:40.064515 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-06 00:45:40.064526 | orchestrator | Saturday 06 September 2025 00:42:29 +0000 (0:00:01.102) 0:00:18.833 **** 2025-09-06 00:45:40.064538 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:45:40.064548 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:45:40.064559 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:45:40.064570 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.064581 | 
orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.064591 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.064602 | orchestrator | 2025-09-06 00:45:40.064613 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-06 00:45:40.064626 | orchestrator | Saturday 06 September 2025 00:42:32 +0000 (0:00:02.489) 0:00:21.323 **** 2025-09-06 00:45:40.064636 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:45:40.064647 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:45:40.064658 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:45:40.064669 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.064679 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.064690 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.064701 | orchestrator | 2025-09-06 00:45:40.064712 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-06 00:45:40.064722 | orchestrator | Saturday 06 September 2025 00:42:32 +0000 (0:00:00.795) 0:00:22.118 **** 2025-09-06 00:45:40.064733 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-06 00:45:40.064745 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-06 00:45:40.064755 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-06 00:45:40.064766 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-06 00:45:40.064777 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-06 00:45:40.064788 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-06 00:45:40.064799 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-06 00:45:40.064809 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-06 00:45:40.064820 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-06 00:45:40.064831 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-06 00:45:40.064842 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-06 00:45:40.064860 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-06 00:45:40.064871 | orchestrator | 2025-09-06 00:45:40.064882 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-06 00:45:40.064893 | orchestrator | Saturday 06 September 2025 00:42:35 +0000 (0:00:02.062) 0:00:24.181 **** 2025-09-06 00:45:40.064904 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:45:40.064915 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:45:40.064926 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:45:40.064937 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.064947 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.064958 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.064969 | orchestrator | 2025-09-06 00:45:40.064989 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-06 00:45:40.065001 | orchestrator | 2025-09-06 00:45:40.065012 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-06 00:45:40.065023 | orchestrator | Saturday 06 September 2025 00:42:37 +0000 (0:00:02.174) 0:00:26.355 **** 2025-09-06 00:45:40.065047 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.065059 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.065072 | 
orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.065084 | orchestrator | 2025-09-06 00:45:40.065097 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-06 00:45:40.065110 | orchestrator | Saturday 06 September 2025 00:42:38 +0000 (0:00:00.975) 0:00:27.331 **** 2025-09-06 00:45:40.065123 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.065219 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.065235 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.065249 | orchestrator | 2025-09-06 00:45:40.065262 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-06 00:45:40.065274 | orchestrator | Saturday 06 September 2025 00:42:39 +0000 (0:00:00.997) 0:00:28.328 **** 2025-09-06 00:45:40.065286 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.065298 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.065311 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.065324 | orchestrator | 2025-09-06 00:45:40.065337 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-06 00:45:40.065350 | orchestrator | Saturday 06 September 2025 00:42:41 +0000 (0:00:02.077) 0:00:30.405 **** 2025-09-06 00:45:40.065363 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.065376 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.065388 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.065401 | orchestrator | 2025-09-06 00:45:40.065414 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-06 00:45:40.065470 | orchestrator | Saturday 06 September 2025 00:42:42 +0000 (0:00:01.011) 0:00:31.417 **** 2025-09-06 00:45:40.065483 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.065494 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.065505 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.065515 | orchestrator | 2025-09-06 00:45:40.065526 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-06 00:45:40.065537 | orchestrator | Saturday 06 September 2025 00:42:42 +0000 (0:00:00.286) 0:00:31.703 **** 2025-09-06 00:45:40.065548 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.065559 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.065569 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.065580 | orchestrator | 2025-09-06 00:45:40.065591 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-06 00:45:40.065602 | orchestrator | Saturday 06 September 2025 00:42:43 +0000 (0:00:00.668) 0:00:32.372 **** 2025-09-06 00:45:40.065619 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.065630 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.065641 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.065652 | orchestrator | 2025-09-06 00:45:40.065663 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-06 00:45:40.065682 | orchestrator | Saturday 06 September 2025 00:42:45 +0000 (0:00:01.818) 0:00:34.190 **** 2025-09-06 00:45:40.065693 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:45:40.065705 | orchestrator | 2025-09-06 00:45:40.065715 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 
2025-09-06 00:45:40.065726 | orchestrator | Saturday 06 September 2025 00:42:45 +0000 (0:00:00.845) 0:00:35.036 **** 2025-09-06 00:45:40.065737 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.065748 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.065759 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.065770 | orchestrator | 2025-09-06 00:45:40.065780 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-06 00:45:40.065791 | orchestrator | Saturday 06 September 2025 00:42:47 +0000 (0:00:01.933) 0:00:36.969 **** 2025-09-06 00:45:40.065802 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.065813 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.065824 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.065834 | orchestrator | 2025-09-06 00:45:40.065845 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-06 00:45:40.065856 | orchestrator | Saturday 06 September 2025 00:42:48 +0000 (0:00:00.723) 0:00:37.693 **** 2025-09-06 00:45:40.065867 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.065878 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.065889 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.065899 | orchestrator | 2025-09-06 00:45:40.065910 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-06 00:45:40.065921 | orchestrator | Saturday 06 September 2025 00:42:49 +0000 (0:00:00.996) 0:00:38.689 **** 2025-09-06 00:45:40.065932 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.065943 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.065954 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.065964 | orchestrator | 2025-09-06 00:45:40.065975 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-06 00:45:40.065986 | orchestrator | Saturday 06 September 2025 00:42:51 +0000 (0:00:01.480) 0:00:40.170 **** 2025-09-06 00:45:40.065997 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.066008 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.066047 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.066060 | orchestrator | 2025-09-06 00:45:40.066071 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-06 00:45:40.066082 | orchestrator | Saturday 06 September 2025 00:42:51 +0000 (0:00:00.403) 0:00:40.573 **** 2025-09-06 00:45:40.066093 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.066104 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.066114 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.066125 | orchestrator | 2025-09-06 00:45:40.066136 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-06 00:45:40.066147 | orchestrator | Saturday 06 September 2025 00:42:51 +0000 (0:00:00.427) 0:00:41.001 **** 2025-09-06 00:45:40.066158 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.066169 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.066179 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.066190 | orchestrator | 2025-09-06 00:45:40.066210 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-06 00:45:40.066222 | orchestrator | Saturday 06 September 
2025 00:42:53 +0000 (0:00:02.122) 0:00:43.123 **** 2025-09-06 00:45:40.066233 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-06 00:45:40.066245 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-06 00:45:40.066256 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-06 00:45:40.066273 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-06 00:45:40.066284 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-06 00:45:40.066295 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-06 00:45:40.066306 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-06 00:45:40.066316 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-06 00:45:40.066327 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-06 00:45:40.066338 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-06 00:45:40.066349 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-06 00:45:40.066364 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-09-06 00:45:40.066375 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.066386 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.066397 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.066408 | orchestrator | 2025-09-06 00:45:40.066419 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-06 00:45:40.066446 | orchestrator | Saturday 06 September 2025 00:43:38 +0000 (0:00:44.989) 0:01:28.113 **** 2025-09-06 00:45:40.066458 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.066468 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.066479 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.066490 | orchestrator | 2025-09-06 00:45:40.066500 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-06 00:45:40.066511 | orchestrator | Saturday 06 September 2025 00:43:39 +0000 (0:00:00.293) 0:01:28.406 **** 2025-09-06 00:45:40.066522 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.066533 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.066544 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.066554 | orchestrator | 2025-09-06 00:45:40.066565 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-06 00:45:40.066576 | orchestrator | Saturday 06 September 2025 00:43:40 +0000 (0:00:01.138) 0:01:29.545 **** 2025-09-06 00:45:40.066587 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.066598 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.066608 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.066619 | orchestrator | 2025-09-06 00:45:40.066630 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-06 00:45:40.066641 | orchestrator | Saturday 06 September 2025 00:43:41 +0000 (0:00:01.537) 0:01:31.082 **** 2025-09-06 00:45:40.066652 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.066663 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.066673 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.066684 | orchestrator | 2025-09-06 00:45:40.066695 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-06 00:45:40.066706 | orchestrator | Saturday 06 September 2025 00:44:05 +0000 (0:00:23.911) 0:01:54.994 **** 2025-09-06 00:45:40.066717 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.066727 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.066738 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.066749 | orchestrator | 2025-09-06 00:45:40.066760 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-06 00:45:40.066777 | orchestrator | Saturday 06 September 2025 00:44:06 +0000 (0:00:00.740) 0:01:55.735 **** 2025-09-06 00:45:40.066788 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.066798 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.066809 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.066820 | orchestrator | 2025-09-06 00:45:40.066831 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-06 00:45:40.066842 | orchestrator | Saturday 06 September 2025 00:44:07 +0000 (0:00:00.670) 0:01:56.406 **** 2025-09-06 00:45:40.066852 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.066863 | orchestrator | changed: 
[testbed-node-1] 2025-09-06 00:45:40.066874 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.066885 | orchestrator | 2025-09-06 00:45:40.066896 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-06 00:45:40.066907 | orchestrator | Saturday 06 September 2025 00:44:07 +0000 (0:00:00.707) 0:01:57.114 **** 2025-09-06 00:45:40.066917 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.066934 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.066945 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.066956 | orchestrator | 2025-09-06 00:45:40.066967 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-06 00:45:40.066978 | orchestrator | Saturday 06 September 2025 00:44:09 +0000 (0:00:01.151) 0:01:58.265 **** 2025-09-06 00:45:40.066989 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.066999 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.067010 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.067021 | orchestrator | 2025-09-06 00:45:40.067032 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-06 00:45:40.067043 | orchestrator | Saturday 06 September 2025 00:44:09 +0000 (0:00:00.334) 0:01:58.600 **** 2025-09-06 00:45:40.067054 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.067065 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.067075 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.067086 | orchestrator | 2025-09-06 00:45:40.067097 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-06 00:45:40.067108 | orchestrator | Saturday 06 September 2025 00:44:10 +0000 (0:00:00.808) 0:01:59.408 **** 2025-09-06 00:45:40.067119 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.067129 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.067140 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.067151 | orchestrator | 2025-09-06 00:45:40.067161 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-06 00:45:40.067172 | orchestrator | Saturday 06 September 2025 00:44:10 +0000 (0:00:00.603) 0:02:00.012 **** 2025-09-06 00:45:40.067183 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.067194 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.067205 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.067215 | orchestrator | 2025-09-06 00:45:40.067226 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-06 00:45:40.067237 | orchestrator | Saturday 06 September 2025 00:44:12 +0000 (0:00:01.159) 0:02:01.171 **** 2025-09-06 00:45:40.067248 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:45:40.067259 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:45:40.067270 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:45:40.067281 | orchestrator | 2025-09-06 00:45:40.067291 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-06 00:45:40.067302 | orchestrator | Saturday 06 September 2025 00:44:12 +0000 (0:00:00.875) 0:02:02.047 **** 2025-09-06 00:45:40.067313 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.067324 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.067334 | orchestrator | skipping: [testbed-node-2] 2025-09-06 
00:45:40.067345 | orchestrator | 2025-09-06 00:45:40.067356 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-06 00:45:40.067371 | orchestrator | Saturday 06 September 2025 00:44:13 +0000 (0:00:00.285) 0:02:02.333 **** 2025-09-06 00:45:40.067396 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.067407 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.067418 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.067442 | orchestrator | 2025-09-06 00:45:40.067453 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-06 00:45:40.067464 | orchestrator | Saturday 06 September 2025 00:44:13 +0000 (0:00:00.336) 0:02:02.670 **** 2025-09-06 00:45:40.067475 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.067486 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.067496 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.067507 | orchestrator | 2025-09-06 00:45:40.067518 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-06 00:45:40.067529 | orchestrator | Saturday 06 September 2025 00:44:14 +0000 (0:00:00.942) 0:02:03.612 **** 2025-09-06 00:45:40.067540 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.067551 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.067562 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.067572 | orchestrator | 2025-09-06 00:45:40.067583 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-06 00:45:40.067594 | orchestrator | Saturday 06 September 2025 00:44:15 +0000 (0:00:00.662) 0:02:04.274 **** 2025-09-06 00:45:40.067605 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-06 00:45:40.067616 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-06 00:45:40.067627 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-06 00:45:40.067638 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-06 00:45:40.067649 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-06 00:45:40.067659 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-06 00:45:40.067670 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-06 00:45:40.067681 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-06 00:45:40.067692 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-06 00:45:40.067703 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-06 00:45:40.067713 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-06 00:45:40.067724 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-06 00:45:40.067735 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-06 00:45:40.067751 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-06 00:45:40.067763 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-06 00:45:40.067774 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-06 00:45:40.067785 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-06 00:45:40.067795 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-06 00:45:40.067806 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-06 00:45:40.067817 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-06 00:45:40.067828 | orchestrator | 2025-09-06 00:45:40.067839 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-06 00:45:40.067855 | orchestrator | 2025-09-06 00:45:40.067866 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-06 00:45:40.067877 | orchestrator | Saturday 06 September 2025 00:44:18 +0000 (0:00:03.529) 0:02:07.804 **** 2025-09-06 00:45:40.067888 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:45:40.067899 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:45:40.067910 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:45:40.067921 | orchestrator | 2025-09-06 00:45:40.067932 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-06 00:45:40.067943 | orchestrator | Saturday 06 September 2025 00:44:19 +0000 (0:00:00.565) 0:02:08.369 **** 2025-09-06 00:45:40.067953 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:45:40.067964 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:45:40.067975 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:45:40.067986 | orchestrator | 2025-09-06 00:45:40.067996 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-06 00:45:40.068007 | orchestrator | Saturday 06 September 2025 00:44:20 +0000 (0:00:01.523) 0:02:09.892 **** 2025-09-06 00:45:40.068018 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:45:40.068028 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:45:40.068039 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:45:40.068050 | orchestrator | 2025-09-06 00:45:40.068061 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-06 00:45:40.068072 | orchestrator | Saturday 06 September 2025 00:44:21 +0000 (0:00:00.323) 0:02:10.216 **** 2025-09-06 00:45:40.068082 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:45:40.068093 | orchestrator | 2025-09-06 00:45:40.068109 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-06 00:45:40.068120 | orchestrator | Saturday 06 September 2025 00:44:21 +0000 (0:00:00.602) 0:02:10.818 **** 2025-09-06 00:45:40.068131 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:45:40.068142 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:45:40.068153 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:45:40.068164 | orchestrator | 2025-09-06 00:45:40.068175 | orchestrator | TASK [k3s_agent : Copy 
K3s http_proxy conf file] ******************************* 2025-09-06 00:45:40.068185 | orchestrator | Saturday 06 September 2025 00:44:21 +0000 (0:00:00.330) 0:02:11.149 **** 2025-09-06 00:45:40.068196 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:45:40.068207 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:45:40.068218 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:45:40.068228 | orchestrator | 2025-09-06 00:45:40.068239 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-06 00:45:40.068250 | orchestrator | Saturday 06 September 2025 00:44:22 +0000 (0:00:00.315) 0:02:11.464 **** 2025-09-06 00:45:40.068261 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:45:40.068272 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:45:40.068282 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:45:40.068293 | orchestrator | 2025-09-06 00:45:40.068304 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-06 00:45:40.068314 | orchestrator | Saturday 06 September 2025 00:44:22 +0000 (0:00:00.342) 0:02:11.806 **** 2025-09-06 00:45:40.068325 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:45:40.068336 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:45:40.068347 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:45:40.068357 | orchestrator | 2025-09-06 00:45:40.068368 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-06 00:45:40.068379 | orchestrator | Saturday 06 September 2025 00:44:23 +0000 (0:00:00.908) 0:02:12.715 **** 2025-09-06 00:45:40.068390 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:45:40.068400 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:45:40.068411 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:45:40.068422 | orchestrator | 2025-09-06 00:45:40.068478 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-06 00:45:40.068496 | orchestrator | Saturday 06 September 2025 00:44:24 +0000 (0:00:01.284) 0:02:14.000 **** 2025-09-06 00:45:40.068507 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:45:40.068518 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:45:40.068529 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:45:40.068540 | orchestrator | 2025-09-06 00:45:40.068551 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-06 00:45:40.068562 | orchestrator | Saturday 06 September 2025 00:44:26 +0000 (0:00:01.352) 0:02:15.352 **** 2025-09-06 00:45:40.068573 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:45:40.068584 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:45:40.068595 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:45:40.068606 | orchestrator | 2025-09-06 00:45:40.068617 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-06 00:45:40.068627 | orchestrator | 2025-09-06 00:45:40.068638 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-06 00:45:40.068649 | orchestrator | Saturday 06 September 2025 00:44:37 +0000 (0:00:11.467) 0:02:26.820 **** 2025-09-06 00:45:40.068660 | orchestrator | ok: [testbed-manager] 2025-09-06 00:45:40.068671 | orchestrator | 2025-09-06 00:45:40.068682 | orchestrator | TASK [Create .kube directory] ************************************************** 
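The k3s_agent tasks above ("Configure the k3s service" and "Manage k3s service") are what actually join testbed-node-3/4/5 to the cluster. The role's own templates are not part of this job output; the following is only a minimal sketch of the usual pattern, assuming an illustrative variable name (k3s_token) and the k3s-node unit name implied by the k3s-node.service.d directory referenced earlier:

    # Hypothetical rendering of the agent's /etc/rancher/k3s/config.yaml: the kube VIP
    # seen later in this log and the node-token collected from the first server.
    - name: Write agent configuration
      ansible.builtin.copy:
        dest: /etc/rancher/k3s/config.yaml
        mode: "0600"
        content: |
          server: "https://192.168.16.8:6443"
          token: "{{ k3s_token }}"
          resolv-conf: /etc/rancher/k3s/resolv.conf

    # Unit name assumed from the k3s-node.service.d directory mentioned above.
    - name: Enable and start the agent service
      ansible.builtin.systemd:
        name: k3s-node
        enabled: true
        state: started
        daemon_reload: true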
2025-09-06 00:45:40.068693 | orchestrator | Saturday 06 September 2025 00:44:38 +0000 (0:00:00.824) 0:02:27.644 **** 2025-09-06 00:45:40.068710 | orchestrator | changed: [testbed-manager] 2025-09-06 00:45:40.068722 | orchestrator | 2025-09-06 00:45:40.068733 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-06 00:45:40.068744 | orchestrator | Saturday 06 September 2025 00:44:38 +0000 (0:00:00.405) 0:02:28.049 **** 2025-09-06 00:45:40.068755 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-06 00:45:40.068765 | orchestrator | 2025-09-06 00:45:40.068776 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-06 00:45:40.068787 | orchestrator | Saturday 06 September 2025 00:44:39 +0000 (0:00:00.582) 0:02:28.632 **** 2025-09-06 00:45:40.068798 | orchestrator | changed: [testbed-manager] 2025-09-06 00:45:40.068807 | orchestrator | 2025-09-06 00:45:40.068817 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-06 00:45:40.068826 | orchestrator | Saturday 06 September 2025 00:44:40 +0000 (0:00:00.813) 0:02:29.446 **** 2025-09-06 00:45:40.068836 | orchestrator | changed: [testbed-manager] 2025-09-06 00:45:40.068845 | orchestrator | 2025-09-06 00:45:40.068855 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-06 00:45:40.068865 | orchestrator | Saturday 06 September 2025 00:44:40 +0000 (0:00:00.578) 0:02:30.024 **** 2025-09-06 00:45:40.068875 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-06 00:45:40.068884 | orchestrator | 2025-09-06 00:45:40.068894 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-06 00:45:40.068904 | orchestrator | Saturday 06 September 2025 00:44:42 +0000 (0:00:01.638) 0:02:31.663 **** 2025-09-06 00:45:40.068913 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-06 00:45:40.068923 | orchestrator | 2025-09-06 00:45:40.068932 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-06 00:45:40.068942 | orchestrator | Saturday 06 September 2025 00:44:43 +0000 (0:00:00.868) 0:02:32.531 **** 2025-09-06 00:45:40.068952 | orchestrator | changed: [testbed-manager] 2025-09-06 00:45:40.068961 | orchestrator | 2025-09-06 00:45:40.068971 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-06 00:45:40.068981 | orchestrator | Saturday 06 September 2025 00:44:43 +0000 (0:00:00.399) 0:02:32.931 **** 2025-09-06 00:45:40.068990 | orchestrator | changed: [testbed-manager] 2025-09-06 00:45:40.069000 | orchestrator | 2025-09-06 00:45:40.069009 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-06 00:45:40.069019 | orchestrator | 2025-09-06 00:45:40.069028 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-06 00:45:40.069043 | orchestrator | Saturday 06 September 2025 00:44:44 +0000 (0:00:00.592) 0:02:33.523 **** 2025-09-06 00:45:40.069053 | orchestrator | ok: [testbed-manager] 2025-09-06 00:45:40.069063 | orchestrator | 2025-09-06 00:45:40.069076 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-06 00:45:40.069086 | orchestrator | Saturday 06 September 2025 00:44:44 +0000 (0:00:00.140) 0:02:33.664 **** 
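The "Prepare kubeconfig file" play above pulls the admin kubeconfig that k3s writes on the first control-plane node and points it at the API VIP instead of the local loopback address. The exact tasks live in the testbed playbooks and are not reproduced here; a minimal sketch of the same pattern with standard modules, where the source path and the 127.0.0.1 default are assumptions about the k3s-generated file:

    - name: Get kubeconfig file
      ansible.builtin.slurp:
        src: /etc/rancher/k3s/k3s.yaml      # default location of the k3s admin kubeconfig
      delegate_to: testbed-node-0
      register: k3s_kubeconfig

    - name: Write kubeconfig file
      ansible.builtin.copy:
        content: "{{ k3s_kubeconfig.content | b64decode }}"
        dest: "{{ ansible_env.HOME }}/.kube/config"
        mode: "0600"

    - name: Change server address in the kubeconfig
      ansible.builtin.replace:
        path: "{{ ansible_env.HOME }}/.kube/config"
        regexp: 'https://127\.0\.0\.1:6443'
        replace: 'https://192.168.16.8:6443'   # kube VIP used by "Configure kubectl cluster" above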
2025-09-06 00:45:40.069096 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-06 00:45:40.069106 | orchestrator | 2025-09-06 00:45:40.069115 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-06 00:45:40.069125 | orchestrator | Saturday 06 September 2025 00:44:44 +0000 (0:00:00.209) 0:02:33.873 **** 2025-09-06 00:45:40.069134 | orchestrator | ok: [testbed-manager] 2025-09-06 00:45:40.069144 | orchestrator | 2025-09-06 00:45:40.069154 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-09-06 00:45:40.069163 | orchestrator | Saturday 06 September 2025 00:44:45 +0000 (0:00:00.813) 0:02:34.687 **** 2025-09-06 00:45:40.069173 | orchestrator | ok: [testbed-manager] 2025-09-06 00:45:40.069183 | orchestrator | 2025-09-06 00:45:40.069192 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-06 00:45:40.069202 | orchestrator | Saturday 06 September 2025 00:44:47 +0000 (0:00:01.484) 0:02:36.171 **** 2025-09-06 00:45:40.069212 | orchestrator | changed: [testbed-manager] 2025-09-06 00:45:40.069221 | orchestrator | 2025-09-06 00:45:40.069231 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-06 00:45:40.069240 | orchestrator | Saturday 06 September 2025 00:44:47 +0000 (0:00:00.782) 0:02:36.953 **** 2025-09-06 00:45:40.069250 | orchestrator | ok: [testbed-manager] 2025-09-06 00:45:40.069259 | orchestrator | 2025-09-06 00:45:40.069269 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-06 00:45:40.069279 | orchestrator | Saturday 06 September 2025 00:44:48 +0000 (0:00:00.406) 0:02:37.360 **** 2025-09-06 00:45:40.069288 | orchestrator | changed: [testbed-manager] 2025-09-06 00:45:40.069298 | orchestrator | 2025-09-06 00:45:40.069307 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-06 00:45:40.069317 | orchestrator | Saturday 06 September 2025 00:44:56 +0000 (0:00:08.451) 0:02:45.811 **** 2025-09-06 00:45:40.069326 | orchestrator | changed: [testbed-manager] 2025-09-06 00:45:40.069336 | orchestrator | 2025-09-06 00:45:40.069345 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-06 00:45:40.069355 | orchestrator | Saturday 06 September 2025 00:45:09 +0000 (0:00:12.454) 0:02:58.266 **** 2025-09-06 00:45:40.069365 | orchestrator | ok: [testbed-manager] 2025-09-06 00:45:40.069374 | orchestrator | 2025-09-06 00:45:40.069384 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-06 00:45:40.069393 | orchestrator | 2025-09-06 00:45:40.069403 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-06 00:45:40.069413 | orchestrator | Saturday 06 September 2025 00:45:09 +0000 (0:00:00.426) 0:02:58.692 **** 2025-09-06 00:45:40.069435 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.069445 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.069455 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.069464 | orchestrator | 2025-09-06 00:45:40.069474 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-06 00:45:40.069483 | orchestrator | Saturday 06 September 2025 00:45:09 +0000 (0:00:00.302) 
0:02:58.995 **** 2025-09-06 00:45:40.069493 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.069503 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.069512 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.069521 | orchestrator | 2025-09-06 00:45:40.069536 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-06 00:45:40.069546 | orchestrator | Saturday 06 September 2025 00:45:10 +0000 (0:00:00.463) 0:02:59.459 **** 2025-09-06 00:45:40.069555 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:45:40.069573 | orchestrator | 2025-09-06 00:45:40.069583 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-06 00:45:40.069592 | orchestrator | Saturday 06 September 2025 00:45:10 +0000 (0:00:00.675) 0:03:00.134 **** 2025-09-06 00:45:40.069602 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.069611 | orchestrator | 2025-09-06 00:45:40.069621 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-06 00:45:40.069630 | orchestrator | Saturday 06 September 2025 00:45:11 +0000 (0:00:00.200) 0:03:00.335 **** 2025-09-06 00:45:40.069640 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.069649 | orchestrator | 2025-09-06 00:45:40.069659 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-06 00:45:40.069668 | orchestrator | Saturday 06 September 2025 00:45:11 +0000 (0:00:00.195) 0:03:00.530 **** 2025-09-06 00:45:40.069678 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.069687 | orchestrator | 2025-09-06 00:45:40.069697 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-06 00:45:40.069706 | orchestrator | Saturday 06 September 2025 00:45:11 +0000 (0:00:00.210) 0:03:00.740 **** 2025-09-06 00:45:40.069716 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.069725 | orchestrator | 2025-09-06 00:45:40.069735 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-06 00:45:40.069744 | orchestrator | Saturday 06 September 2025 00:45:11 +0000 (0:00:00.161) 0:03:00.902 **** 2025-09-06 00:45:40.069754 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.069763 | orchestrator | 2025-09-06 00:45:40.069773 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-06 00:45:40.069782 | orchestrator | Saturday 06 September 2025 00:45:11 +0000 (0:00:00.182) 0:03:01.084 **** 2025-09-06 00:45:40.069792 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.069802 | orchestrator | 2025-09-06 00:45:40.069811 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-06 00:45:40.069821 | orchestrator | Saturday 06 September 2025 00:45:12 +0000 (0:00:00.151) 0:03:01.235 **** 2025-09-06 00:45:40.069830 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.069840 | orchestrator | 2025-09-06 00:45:40.069849 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-06 00:45:40.069863 | orchestrator | Saturday 06 September 2025 00:45:12 +0000 (0:00:00.166) 0:03:01.401 **** 2025-09-06 00:45:40.069872 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.069882 | 
orchestrator | 2025-09-06 00:45:40.069892 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-06 00:45:40.069901 | orchestrator | Saturday 06 September 2025 00:45:12 +0000 (0:00:00.183) 0:03:01.585 **** 2025-09-06 00:45:40.069910 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.069920 | orchestrator | 2025-09-06 00:45:40.069930 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-06 00:45:40.069939 | orchestrator | Saturday 06 September 2025 00:45:12 +0000 (0:00:00.164) 0:03:01.749 **** 2025-09-06 00:45:40.069949 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-06 00:45:40.069959 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-06 00:45:40.069968 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.069978 | orchestrator | 2025-09-06 00:45:40.069987 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-06 00:45:40.069997 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.511) 0:03:02.261 **** 2025-09-06 00:45:40.070006 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070036 | orchestrator | 2025-09-06 00:45:40.070048 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-06 00:45:40.070057 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.181) 0:03:02.443 **** 2025-09-06 00:45:40.070067 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070082 | orchestrator | 2025-09-06 00:45:40.070092 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-06 00:45:40.070102 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.178) 0:03:02.621 **** 2025-09-06 00:45:40.070111 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070121 | orchestrator | 2025-09-06 00:45:40.070130 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-06 00:45:40.070140 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.187) 0:03:02.809 **** 2025-09-06 00:45:40.070149 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070159 | orchestrator | 2025-09-06 00:45:40.070169 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-06 00:45:40.070178 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.182) 0:03:02.992 **** 2025-09-06 00:45:40.070188 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070198 | orchestrator | 2025-09-06 00:45:40.070207 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-06 00:45:40.070217 | orchestrator | Saturday 06 September 2025 00:45:14 +0000 (0:00:00.164) 0:03:03.157 **** 2025-09-06 00:45:40.070226 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070236 | orchestrator | 2025-09-06 00:45:40.070245 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-06 00:45:40.070255 | orchestrator | Saturday 06 September 2025 00:45:14 +0000 (0:00:00.158) 0:03:03.315 **** 2025-09-06 00:45:40.070265 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070274 | orchestrator | 2025-09-06 00:45:40.070284 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-09-06 
00:45:40.070294 | orchestrator | Saturday 06 September 2025 00:45:14 +0000 (0:00:00.239) 0:03:03.555 **** 2025-09-06 00:45:40.070303 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070313 | orchestrator | 2025-09-06 00:45:40.070323 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-06 00:45:40.070337 | orchestrator | Saturday 06 September 2025 00:45:14 +0000 (0:00:00.158) 0:03:03.714 **** 2025-09-06 00:45:40.070347 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070357 | orchestrator | 2025-09-06 00:45:40.070367 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-06 00:45:40.070377 | orchestrator | Saturday 06 September 2025 00:45:14 +0000 (0:00:00.145) 0:03:03.859 **** 2025-09-06 00:45:40.070386 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070396 | orchestrator | 2025-09-06 00:45:40.070405 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-06 00:45:40.070415 | orchestrator | Saturday 06 September 2025 00:45:14 +0000 (0:00:00.151) 0:03:04.011 **** 2025-09-06 00:45:40.070437 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070448 | orchestrator | 2025-09-06 00:45:40.070457 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-06 00:45:40.070467 | orchestrator | Saturday 06 September 2025 00:45:14 +0000 (0:00:00.133) 0:03:04.145 **** 2025-09-06 00:45:40.070476 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-06 00:45:40.070486 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-06 00:45:40.070496 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-06 00:45:40.070505 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-06 00:45:40.070515 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070524 | orchestrator | 2025-09-06 00:45:40.070534 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-06 00:45:40.070544 | orchestrator | Saturday 06 September 2025 00:45:15 +0000 (0:00:00.684) 0:03:04.830 **** 2025-09-06 00:45:40.070554 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070563 | orchestrator | 2025-09-06 00:45:40.070573 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-06 00:45:40.070582 | orchestrator | Saturday 06 September 2025 00:45:15 +0000 (0:00:00.162) 0:03:04.993 **** 2025-09-06 00:45:40.070597 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070607 | orchestrator | 2025-09-06 00:45:40.070616 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-06 00:45:40.070626 | orchestrator | Saturday 06 September 2025 00:45:16 +0000 (0:00:00.187) 0:03:05.180 **** 2025-09-06 00:45:40.070636 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070645 | orchestrator | 2025-09-06 00:45:40.070655 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-06 00:45:40.070665 | orchestrator | Saturday 06 September 2025 00:45:16 +0000 (0:00:00.170) 0:03:05.351 **** 2025-09-06 00:45:40.070674 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070684 | orchestrator | 2025-09-06 00:45:40.070698 | orchestrator | TASK 
[k3s_server_post : Test for BGP config resources] ************************* 2025-09-06 00:45:40.070708 | orchestrator | Saturday 06 September 2025 00:45:16 +0000 (0:00:00.151) 0:03:05.502 **** 2025-09-06 00:45:40.070717 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-06 00:45:40.070727 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-09-06 00:45:40.070737 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070746 | orchestrator | 2025-09-06 00:45:40.070756 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-06 00:45:40.070766 | orchestrator | Saturday 06 September 2025 00:45:16 +0000 (0:00:00.242) 0:03:05.745 **** 2025-09-06 00:45:40.070775 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:45:40.070785 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:45:40.070794 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:45:40.070804 | orchestrator | 2025-09-06 00:45:40.070814 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-06 00:45:40.070823 | orchestrator | Saturday 06 September 2025 00:45:16 +0000 (0:00:00.241) 0:03:05.987 **** 2025-09-06 00:45:40.070833 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.070843 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:45:40.070852 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.070862 | orchestrator | 2025-09-06 00:45:40.070871 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-06 00:45:40.070881 | orchestrator | 2025-09-06 00:45:40.070890 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-06 00:45:40.070900 | orchestrator | Saturday 06 September 2025 00:45:17 +0000 (0:00:00.963) 0:03:06.950 **** 2025-09-06 00:45:40.070910 | orchestrator | ok: [testbed-manager] 2025-09-06 00:45:40.070919 | orchestrator | 2025-09-06 00:45:40.070928 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-06 00:45:40.070938 | orchestrator | Saturday 06 September 2025 00:45:17 +0000 (0:00:00.118) 0:03:07.069 **** 2025-09-06 00:45:40.070948 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-09-06 00:45:40.070957 | orchestrator | 2025-09-06 00:45:40.070967 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-06 00:45:40.070976 | orchestrator | Saturday 06 September 2025 00:45:18 +0000 (0:00:00.186) 0:03:07.256 **** 2025-09-06 00:45:40.070986 | orchestrator | changed: [testbed-manager] 2025-09-06 00:45:40.070995 | orchestrator | 2025-09-06 00:45:40.071005 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-06 00:45:40.071014 | orchestrator | 2025-09-06 00:45:40.071024 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-06 00:45:40.071033 | orchestrator | Saturday 06 September 2025 00:45:23 +0000 (0:00:04.984) 0:03:12.240 **** 2025-09-06 00:45:40.071043 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:45:40.071052 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:45:40.071062 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:45:40.071071 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:45:40.071081 | orchestrator | ok: 
[testbed-node-1] 2025-09-06 00:45:40.071090 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:45:40.071109 | orchestrator | 2025-09-06 00:45:40.071119 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-06 00:45:40.071129 | orchestrator | Saturday 06 September 2025 00:45:23 +0000 (0:00:00.698) 0:03:12.939 **** 2025-09-06 00:45:40.071143 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-06 00:45:40.071153 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-06 00:45:40.071163 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-06 00:45:40.071172 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-06 00:45:40.071182 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-06 00:45:40.071192 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-06 00:45:40.071201 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-06 00:45:40.071211 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-06 00:45:40.071220 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-06 00:45:40.071230 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-06 00:45:40.071239 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-06 00:45:40.071249 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-06 00:45:40.071258 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-06 00:45:40.071268 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-06 00:45:40.071277 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-06 00:45:40.071287 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-06 00:45:40.071296 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-06 00:45:40.071306 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-06 00:45:40.071315 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-06 00:45:40.071329 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-06 00:45:40.071338 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-06 00:45:40.071348 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-06 00:45:40.071358 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-06 00:45:40.071367 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-06 00:45:40.071377 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-06 00:45:40.071386 | orchestrator | ok: [testbed-node-2 -> 
localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-06 00:45:40.071395 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-06 00:45:40.071405 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-06 00:45:40.071414 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-06 00:45:40.071434 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-06 00:45:40.071445 | orchestrator |
2025-09-06 00:45:40.071454 | orchestrator | TASK [Manage annotations] ******************************************************
2025-09-06 00:45:40.071464 | orchestrator | Saturday 06 September 2025 00:45:35 +0000 (0:00:11.964) 0:03:24.904 ****
2025-09-06 00:45:40.071480 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:45:40.071490 | orchestrator | skipping: [testbed-node-4]
2025-09-06 00:45:40.071500 | orchestrator | skipping: [testbed-node-5]
2025-09-06 00:45:40.071509 | orchestrator | skipping: [testbed-node-0]
2025-09-06 00:45:40.071519 | orchestrator | skipping: [testbed-node-1]
2025-09-06 00:45:40.071528 | orchestrator | skipping: [testbed-node-2]
2025-09-06 00:45:40.071538 | orchestrator |
2025-09-06 00:45:40.071547 | orchestrator | TASK [Manage taints] ***********************************************************
2025-09-06 00:45:40.071557 | orchestrator | Saturday 06 September 2025 00:45:36 +0000 (0:00:00.695) 0:03:25.599 ****
2025-09-06 00:45:40.071567 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:45:40.071576 | orchestrator | skipping: [testbed-node-4]
2025-09-06 00:45:40.071586 | orchestrator | skipping: [testbed-node-5]
2025-09-06 00:45:40.071596 | orchestrator | skipping: [testbed-node-0]
2025-09-06 00:45:40.071605 | orchestrator | skipping: [testbed-node-1]
2025-09-06 00:45:40.071614 | orchestrator | skipping: [testbed-node-2]
2025-09-06 00:45:40.071624 | orchestrator |
2025-09-06 00:45:40.071633 | orchestrator | PLAY RECAP *********************************************************************
2025-09-06 00:45:40.071643 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-06 00:45:40.071654 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-06 00:45:40.071664 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-06 00:45:40.071678 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-06 00:45:40.071689 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-06 00:45:40.071699 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-06 00:45:40.071708 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-06 00:45:40.071718 | orchestrator |
2025-09-06 00:45:40.071727 | orchestrator |
2025-09-06 00:45:40.071737 | orchestrator | TASKS RECAP ********************************************************************
2025-09-06 00:45:40.071747 | orchestrator | Saturday 06 September 2025 00:45:36 +0000 (0:00:00.492) 0:03:26.092 ****
2025-09-06 00:45:40.071756 | orchestrator | ===============================================================================
2025-09-06 00:45:40.071766 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.99s
2025-09-06 00:45:40.071776 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 23.91s
2025-09-06 00:45:40.071785 | orchestrator | kubectl : Install required packages ------------------------------------ 12.45s
2025-09-06 00:45:40.071795 | orchestrator | Manage labels ---------------------------------------------------------- 11.96s
2025-09-06 00:45:40.071804 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.47s
2025-09-06 00:45:40.071814 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.45s
2025-09-06 00:45:40.071823 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.97s
2025-09-06 00:45:40.071833 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.98s
2025-09-06 00:45:40.071842 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.53s
2025-09-06 00:45:40.071852 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.49s
2025-09-06 00:45:40.071867 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.17s
2025-09-06 00:45:40.071877 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.13s
2025-09-06 00:45:40.071886 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.12s
2025-09-06 00:45:40.071896 | orchestrator | k3s_server : Stop k3s --------------------------------------------------- 2.08s
2025-09-06 00:45:40.071905 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.06s
2025-09-06 00:45:40.071915 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.93s
2025-09-06 00:45:40.071934 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.82s
2025-09-06 00:45:40.071944 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.64s
2025-09-06 00:45:40.071954 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 1.54s
2025-09-06 00:45:40.071963 | orchestrator | k3s_agent : Check if system is PXE-booted ------------------------------- 1.52s
2025-09-06 00:45:40.071973 | orchestrator | 2025-09-06 00:45:40 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED
2025-09-06 00:45:40.071983 | orchestrator | 2025-09-06 00:45:40 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED
2025-09-06 00:45:40.071993 | orchestrator | 2025-09-06 00:45:40 | INFO  | Task 59c2ee79-0fa2-46aa-adbb-ada1545e01f2 is in state STARTED
2025-09-06 00:45:40.072003 | orchestrator | 2025-09-06 00:45:40 | INFO  | Task 43bbd070-ada1-4026-b93b-8bf883a9a077 is in state STARTED
2025-09-06 00:45:40.072012 | orchestrator | 2025-09-06 00:45:40 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED
2025-09-06 00:45:40.072022 | orchestrator | 2025-09-06 00:45:40 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED
2025-09-06 00:45:40.072032 | orchestrator | 2025-09-06 00:45:40 | INFO  | Wait 1 second(s) until the next check
2025-09-06 00:45:43.222569 | orchestrator | 2025-09-06 00:45:43 | INFO  |
Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:43.222905 | orchestrator | 2025-09-06 00:45:43 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:43.223638 | orchestrator | 2025-09-06 00:45:43 | INFO  | Task 59c2ee79-0fa2-46aa-adbb-ada1545e01f2 is in state STARTED 2025-09-06 00:45:43.225058 | orchestrator | 2025-09-06 00:45:43 | INFO  | Task 43bbd070-ada1-4026-b93b-8bf883a9a077 is in state STARTED 2025-09-06 00:45:43.226287 | orchestrator | 2025-09-06 00:45:43 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:43.228245 | orchestrator | 2025-09-06 00:45:43 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:43.228269 | orchestrator | 2025-09-06 00:45:43 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:46.270112 | orchestrator | 2025-09-06 00:45:46 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:46.272965 | orchestrator | 2025-09-06 00:45:46 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:46.273174 | orchestrator | 2025-09-06 00:45:46 | INFO  | Task 59c2ee79-0fa2-46aa-adbb-ada1545e01f2 is in state STARTED 2025-09-06 00:45:46.273956 | orchestrator | 2025-09-06 00:45:46 | INFO  | Task 43bbd070-ada1-4026-b93b-8bf883a9a077 is in state SUCCESS 2025-09-06 00:45:46.275276 | orchestrator | 2025-09-06 00:45:46 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:46.277089 | orchestrator | 2025-09-06 00:45:46 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:46.277112 | orchestrator | 2025-09-06 00:45:46 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:49.298297 | orchestrator | 2025-09-06 00:45:49 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:49.298383 | orchestrator | 2025-09-06 00:45:49 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:49.298753 | orchestrator | 2025-09-06 00:45:49 | INFO  | Task 59c2ee79-0fa2-46aa-adbb-ada1545e01f2 is in state SUCCESS 2025-09-06 00:45:49.299537 | orchestrator | 2025-09-06 00:45:49 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:49.300472 | orchestrator | 2025-09-06 00:45:49 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:49.300608 | orchestrator | 2025-09-06 00:45:49 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:52.335314 | orchestrator | 2025-09-06 00:45:52 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:52.335649 | orchestrator | 2025-09-06 00:45:52 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:52.338499 | orchestrator | 2025-09-06 00:45:52 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:52.339262 | orchestrator | 2025-09-06 00:45:52 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:52.339312 | orchestrator | 2025-09-06 00:45:52 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:55.376722 | orchestrator | 2025-09-06 00:45:55 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:55.377852 | orchestrator | 2025-09-06 00:45:55 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:55.379077 | orchestrator | 
2025-09-06 00:45:55 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:55.379962 | orchestrator | 2025-09-06 00:45:55 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:55.379987 | orchestrator | 2025-09-06 00:45:55 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:45:58.436357 | orchestrator | 2025-09-06 00:45:58 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:45:58.437585 | orchestrator | 2025-09-06 00:45:58 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:45:58.439328 | orchestrator | 2025-09-06 00:45:58 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:45:58.440940 | orchestrator | 2025-09-06 00:45:58 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:45:58.441070 | orchestrator | 2025-09-06 00:45:58 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:46:01.545916 | orchestrator | 2025-09-06 00:46:01 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state STARTED 2025-09-06 00:46:01.546090 | orchestrator | 2025-09-06 00:46:01 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:46:01.548066 | orchestrator | 2025-09-06 00:46:01 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:46:01.553624 | orchestrator | 2025-09-06 00:46:01 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:46:01.553661 | orchestrator | 2025-09-06 00:46:01 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:46:04.599960 | orchestrator | 2025-09-06 00:46:04 | INFO  | Task cb6d68a2-1a4a-4cc6-8be5-9a5951a8d2f5 is in state SUCCESS 2025-09-06 00:46:04.601017 | orchestrator | 2025-09-06 00:46:04.601203 | orchestrator | 2025-09-06 00:46:04.601220 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-06 00:46:04.601281 | orchestrator | 2025-09-06 00:46:04.601294 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-06 00:46:04.601306 | orchestrator | Saturday 06 September 2025 00:45:41 +0000 (0:00:00.153) 0:00:00.153 **** 2025-09-06 00:46:04.601317 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-06 00:46:04.601329 | orchestrator | 2025-09-06 00:46:04.601340 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-06 00:46:04.601351 | orchestrator | Saturday 06 September 2025 00:45:42 +0000 (0:00:00.780) 0:00:00.934 **** 2025-09-06 00:46:04.601362 | orchestrator | changed: [testbed-manager] 2025-09-06 00:46:04.601374 | orchestrator | 2025-09-06 00:46:04.601385 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-06 00:46:04.601428 | orchestrator | Saturday 06 September 2025 00:45:43 +0000 (0:00:01.020) 0:00:01.954 **** 2025-09-06 00:46:04.601439 | orchestrator | changed: [testbed-manager] 2025-09-06 00:46:04.601450 | orchestrator | 2025-09-06 00:46:04.601461 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:46:04.601473 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:46:04.601486 | orchestrator | 2025-09-06 00:46:04.601497 | orchestrator | 2025-09-06 00:46:04.601508 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-06 00:46:04.601519 | orchestrator | Saturday 06 September 2025 00:45:43 +0000 (0:00:00.389) 0:00:02.344 **** 2025-09-06 00:46:04.601530 | orchestrator | =============================================================================== 2025-09-06 00:46:04.601540 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.02s 2025-09-06 00:46:04.601551 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s 2025-09-06 00:46:04.601562 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.39s 2025-09-06 00:46:04.601573 | orchestrator | 2025-09-06 00:46:04.601584 | orchestrator | 2025-09-06 00:46:04.601595 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-06 00:46:04.601605 | orchestrator | 2025-09-06 00:46:04.601616 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-06 00:46:04.601627 | orchestrator | Saturday 06 September 2025 00:45:41 +0000 (0:00:00.210) 0:00:00.210 **** 2025-09-06 00:46:04.601638 | orchestrator | ok: [testbed-manager] 2025-09-06 00:46:04.601650 | orchestrator | 2025-09-06 00:46:04.601661 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-06 00:46:04.601672 | orchestrator | Saturday 06 September 2025 00:45:41 +0000 (0:00:00.654) 0:00:00.865 **** 2025-09-06 00:46:04.601697 | orchestrator | ok: [testbed-manager] 2025-09-06 00:46:04.601709 | orchestrator | 2025-09-06 00:46:04.601720 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-06 00:46:04.601731 | orchestrator | Saturday 06 September 2025 00:45:42 +0000 (0:00:00.455) 0:00:01.321 **** 2025-09-06 00:46:04.601742 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-06 00:46:04.601753 | orchestrator | 2025-09-06 00:46:04.601764 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-06 00:46:04.601774 | orchestrator | Saturday 06 September 2025 00:45:43 +0000 (0:00:00.748) 0:00:02.069 **** 2025-09-06 00:46:04.601785 | orchestrator | changed: [testbed-manager] 2025-09-06 00:46:04.601796 | orchestrator | 2025-09-06 00:46:04.601807 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-06 00:46:04.601821 | orchestrator | Saturday 06 September 2025 00:45:44 +0000 (0:00:01.041) 0:00:03.111 **** 2025-09-06 00:46:04.601834 | orchestrator | changed: [testbed-manager] 2025-09-06 00:46:04.601847 | orchestrator | 2025-09-06 00:46:04.601860 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-06 00:46:04.601873 | orchestrator | Saturday 06 September 2025 00:45:44 +0000 (0:00:00.748) 0:00:03.859 **** 2025-09-06 00:46:04.601886 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-06 00:46:04.601908 | orchestrator | 2025-09-06 00:46:04.601921 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-06 00:46:04.601934 | orchestrator | Saturday 06 September 2025 00:45:46 +0000 (0:00:01.385) 0:00:05.245 **** 2025-09-06 00:46:04.601947 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-06 00:46:04.601961 | orchestrator | 2025-09-06 00:46:04.601974 | orchestrator | TASK [Set KUBECONFIG 
environment variable] ************************************* 2025-09-06 00:46:04.601987 | orchestrator | Saturday 06 September 2025 00:45:47 +0000 (0:00:00.738) 0:00:05.983 **** 2025-09-06 00:46:04.602000 | orchestrator | ok: [testbed-manager] 2025-09-06 00:46:04.602012 | orchestrator | 2025-09-06 00:46:04.602109 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-06 00:46:04.602123 | orchestrator | Saturday 06 September 2025 00:45:47 +0000 (0:00:00.388) 0:00:06.372 **** 2025-09-06 00:46:04.602136 | orchestrator | ok: [testbed-manager] 2025-09-06 00:46:04.602149 | orchestrator | 2025-09-06 00:46:04.602162 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:46:04.602175 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:46:04.602187 | orchestrator | 2025-09-06 00:46:04.602198 | orchestrator | 2025-09-06 00:46:04.602209 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:46:04.602220 | orchestrator | Saturday 06 September 2025 00:45:47 +0000 (0:00:00.314) 0:00:06.686 **** 2025-09-06 00:46:04.602230 | orchestrator | =============================================================================== 2025-09-06 00:46:04.602241 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.39s 2025-09-06 00:46:04.602325 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.04s 2025-09-06 00:46:04.602340 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.75s 2025-09-06 00:46:04.602362 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.75s 2025-09-06 00:46:04.602374 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.74s 2025-09-06 00:46:04.602385 | orchestrator | Get home directory of operator user ------------------------------------- 0.65s 2025-09-06 00:46:04.602430 | orchestrator | Create .kube directory -------------------------------------------------- 0.46s 2025-09-06 00:46:04.602442 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.39s 2025-09-06 00:46:04.602453 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s 2025-09-06 00:46:04.602464 | orchestrator | 2025-09-06 00:46:04.602475 | orchestrator | 2025-09-06 00:46:04.602486 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:46:04.602496 | orchestrator | 2025-09-06 00:46:04.602507 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:46:04.602518 | orchestrator | Saturday 06 September 2025 00:44:55 +0000 (0:00:00.429) 0:00:00.429 **** 2025-09-06 00:46:04.602529 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:46:04.602540 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:46:04.602551 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:46:04.602562 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:46:04.602573 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:46:04.602584 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:46:04.602594 | orchestrator | 2025-09-06 00:46:04.602605 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:46:04.602616 | 
orchestrator | Saturday 06 September 2025 00:44:55 +0000 (0:00:00.690) 0:00:01.119 **** 2025-09-06 00:46:04.602627 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-06 00:46:04.602639 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-06 00:46:04.602650 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-06 00:46:04.602670 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-06 00:46:04.602681 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-06 00:46:04.602692 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-06 00:46:04.602703 | orchestrator | 2025-09-06 00:46:04.602714 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-06 00:46:04.602725 | orchestrator | 2025-09-06 00:46:04.602736 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-06 00:46:04.602747 | orchestrator | Saturday 06 September 2025 00:44:56 +0000 (0:00:00.690) 0:00:01.810 **** 2025-09-06 00:46:04.602766 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:46:04.602779 | orchestrator | 2025-09-06 00:46:04.602790 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-06 00:46:04.602801 | orchestrator | Saturday 06 September 2025 00:44:57 +0000 (0:00:01.498) 0:00:03.308 **** 2025-09-06 00:46:04.602813 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-06 00:46:04.602824 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-06 00:46:04.602835 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-06 00:46:04.602846 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-06 00:46:04.602857 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-06 00:46:04.602868 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-06 00:46:04.602878 | orchestrator | 2025-09-06 00:46:04.602889 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-06 00:46:04.602900 | orchestrator | Saturday 06 September 2025 00:44:59 +0000 (0:00:01.758) 0:00:05.067 **** 2025-09-06 00:46:04.602912 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-06 00:46:04.602922 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-06 00:46:04.602933 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-06 00:46:04.602944 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-06 00:46:04.602955 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-06 00:46:04.602968 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-06 00:46:04.602981 | orchestrator | 2025-09-06 00:46:04.602994 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-06 00:46:04.603008 | orchestrator | Saturday 06 September 2025 00:45:01 +0000 (0:00:01.755) 0:00:06.823 **** 2025-09-06 00:46:04.603020 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-06 00:46:04.603033 | 
orchestrator | skipping: [testbed-node-3] 2025-09-06 00:46:04.603046 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-06 00:46:04.603059 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:46:04.603071 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-06 00:46:04.603084 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:46:04.603096 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-06 00:46:04.603109 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:46:04.603122 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-06 00:46:04.603135 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:46:04.603148 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-06 00:46:04.603159 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:46:04.603169 | orchestrator | 2025-09-06 00:46:04.603180 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-06 00:46:04.603191 | orchestrator | Saturday 06 September 2025 00:45:02 +0000 (0:00:01.375) 0:00:08.198 **** 2025-09-06 00:46:04.603202 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:46:04.603213 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:46:04.603310 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:46:04.603343 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:46:04.603355 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:46:04.603366 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:46:04.603377 | orchestrator | 2025-09-06 00:46:04.603407 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-06 00:46:04.603419 | orchestrator | Saturday 06 September 2025 00:45:03 +0000 (0:00:00.625) 0:00:08.823 **** 2025-09-06 00:46:04.603434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603548 | 
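Each loop item in the task above is a complete container definition, including a Docker healthcheck (ovsdb-client list-dbs for the DB server, ovs-appctl version for vswitchd). Outside of kolla-ansible's own container handling, roughly the same healthcheck could be written with community.docker; this is only a sketch, and reading the bare interval/timeout numbers as seconds is an assumption:

- name: Run openvswitch_db with an equivalent healthcheck   # sketch, not the kolla container module
  community.docker.docker_container:
    name: openvswitch_db
    image: registry.osism.tech/kolla/openvswitch-db-server:2024.2
    volumes:
      - /run/openvswitch:/run/openvswitch:shared
      - openvswitch_db:/var/lib/openvswitch/
    healthcheck:
      test: ["CMD-SHELL", "ovsdb-client list-dbs"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s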
orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603647 | orchestrator | 2025-09-06 00:46:04.603659 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-06 00:46:04.603670 | orchestrator | Saturday 06 September 2025 00:45:05 +0000 (0:00:01.727) 0:00:10.551 **** 2025-09-06 00:46:04.603682 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603698 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603710 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603811 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.603858 | orchestrator | 2025-09-06 00:46:04.603869 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-06 00:46:04.603880 | orchestrator | Saturday 06 September 2025 00:45:08 +0000 (0:00:03.678) 0:00:14.230 **** 2025-09-06 00:46:04.603891 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:46:04.603902 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:46:04.603913 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:46:04.603924 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:46:04.603935 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:46:04.603946 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:46:04.603960 | orchestrator | 2025-09-06 00:46:04.603974 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-06 00:46:04.603987 | orchestrator | Saturday 06 September 2025 00:45:10 +0000 (0:00:01.306) 0:00:15.536 **** 2025-09-06 00:46:04.604005 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604020 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604134 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-06 00:46:04.604257 | orchestrator | 2025-09-06 00:46:04.604269 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-06 00:46:04.604280 | orchestrator | Saturday 06 September 2025 00:45:12 +0000 (0:00:02.549) 0:00:18.086 **** 2025-09-06 00:46:04.604292 | orchestrator | 2025-09-06 00:46:04.604379 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-06 00:46:04.604455 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.290) 0:00:18.376 **** 2025-09-06 00:46:04.604467 | orchestrator | 2025-09-06 00:46:04.604478 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-06 00:46:04.604489 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.150) 0:00:18.527 **** 2025-09-06 00:46:04.604510 | orchestrator | 2025-09-06 00:46:04.604521 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-06 00:46:04.604532 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.121) 0:00:18.649 **** 2025-09-06 00:46:04.604543 | orchestrator | 2025-09-06 00:46:04.604553 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-06 00:46:04.604564 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.210) 0:00:18.859 **** 2025-09-06 00:46:04.604575 | orchestrator | 2025-09-06 00:46:04.604586 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-06 00:46:04.604597 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.145) 0:00:19.005 **** 2025-09-06 00:46:04.604608 | orchestrator | 2025-09-06 00:46:04.604619 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-06 00:46:04.604630 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.103) 0:00:19.108 **** 2025-09-06 00:46:04.604640 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:46:04.604651 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:46:04.604662 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:46:04.604673 | 
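Restarting the openvswitch-db-server containers is followed a little further down by a "Waiting for openvswitch_db service to be ready" handler before vswitchd is restarted. One way to express such a readiness wait by hand (a sketch only, not the kolla-ansible handler; the retry budget is arbitrary):

- name: Wait until the OVS database answers queries        # polls the healthcheck command directly
  ansible.builtin.command: docker exec openvswitch_db ovsdb-client list-dbs
  register: ovsdb_check
  until: ovsdb_check.rc == 0
  retries: 12                                              # arbitrary: about 60 s at a 5 s delay
  delay: 5
  changed_when: false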
orchestrator | changed: [testbed-node-2] 2025-09-06 00:46:04.604684 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:46:04.604695 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:46:04.604706 | orchestrator | 2025-09-06 00:46:04.604717 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-06 00:46:04.604728 | orchestrator | Saturday 06 September 2025 00:45:25 +0000 (0:00:12.146) 0:00:31.256 **** 2025-09-06 00:46:04.604739 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:46:04.604750 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:46:04.604761 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:46:04.604772 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:46:04.604783 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:46:04.604794 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:46:04.604804 | orchestrator | 2025-09-06 00:46:04.604816 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-06 00:46:04.604827 | orchestrator | Saturday 06 September 2025 00:45:27 +0000 (0:00:02.010) 0:00:33.266 **** 2025-09-06 00:46:04.604837 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:46:04.604848 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:46:04.604859 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:46:04.604870 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:46:04.604881 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:46:04.604892 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:46:04.604903 | orchestrator | 2025-09-06 00:46:04.604914 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-06 00:46:04.604925 | orchestrator | Saturday 06 September 2025 00:45:40 +0000 (0:00:12.089) 0:00:45.356 **** 2025-09-06 00:46:04.604936 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-06 00:46:04.604955 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-06 00:46:04.604967 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-06 00:46:04.604978 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-06 00:46:04.604989 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-06 00:46:04.604999 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-06 00:46:04.605008 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-06 00:46:04.605020 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-06 00:46:04.605038 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-06 00:46:04.605050 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-06 00:46:04.605061 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-06 00:46:04.605073 | 
orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-06 00:46:04.605085 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-06 00:46:04.605096 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-06 00:46:04.605107 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-06 00:46:04.605118 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-06 00:46:04.605130 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-06 00:46:04.605148 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-06 00:46:04.605159 | orchestrator | 2025-09-06 00:46:04.605171 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-06 00:46:04.605182 | orchestrator | Saturday 06 September 2025 00:45:48 +0000 (0:00:08.022) 0:00:53.379 **** 2025-09-06 00:46:04.605194 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-06 00:46:04.605204 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:46:04.605214 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-06 00:46:04.605224 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:46:04.605234 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-06 00:46:04.605243 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:46:04.605253 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-06 00:46:04.605263 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-06 00:46:04.605273 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-06 00:46:04.605282 | orchestrator | 2025-09-06 00:46:04.605292 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-06 00:46:04.605302 | orchestrator | Saturday 06 September 2025 00:45:50 +0000 (0:00:02.528) 0:00:55.907 **** 2025-09-06 00:46:04.605312 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-06 00:46:04.605321 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:46:04.605331 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-06 00:46:04.605341 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:46:04.605351 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-06 00:46:04.605361 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:46:04.605371 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-06 00:46:04.605380 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-06 00:46:04.605432 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-06 00:46:04.605444 | orchestrator | 2025-09-06 00:46:04.605454 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-06 00:46:04.605464 | orchestrator | Saturday 06 September 2025 00:45:54 +0000 (0:00:03.581) 0:00:59.488 **** 2025-09-06 00:46:04.605474 | orchestrator | changed: [testbed-node-5] 2025-09-06 
00:46:04.605483 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:46:04.605493 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:46:04.605503 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:46:04.605512 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:46:04.605522 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:46:04.605538 | orchestrator | 2025-09-06 00:46:04.605548 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:46:04.605558 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-06 00:46:04.605569 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-06 00:46:04.605585 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-06 00:46:04.605595 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-06 00:46:04.605606 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-06 00:46:04.605615 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-06 00:46:04.605625 | orchestrator | 2025-09-06 00:46:04.605635 | orchestrator | 2025-09-06 00:46:04.605645 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:46:04.605654 | orchestrator | Saturday 06 September 2025 00:46:02 +0000 (0:00:08.552) 0:01:08.041 **** 2025-09-06 00:46:04.605664 | orchestrator | =============================================================================== 2025-09-06 00:46:04.605674 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.64s 2025-09-06 00:46:04.605684 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.15s 2025-09-06 00:46:04.605693 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.02s 2025-09-06 00:46:04.605703 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.68s 2025-09-06 00:46:04.605712 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.58s 2025-09-06 00:46:04.605722 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.55s 2025-09-06 00:46:04.605732 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.53s 2025-09-06 00:46:04.605741 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.01s 2025-09-06 00:46:04.605751 | orchestrator | module-load : Load modules ---------------------------------------------- 1.76s 2025-09-06 00:46:04.605761 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.76s 2025-09-06 00:46:04.605771 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.73s 2025-09-06 00:46:04.605780 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.50s 2025-09-06 00:46:04.605790 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.38s 2025-09-06 00:46:04.605798 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.31s 2025-09-06 00:46:04.605806 | orchestrator | openvswitch 
: Flush Handlers -------------------------------------------- 1.02s 2025-09-06 00:46:04.605814 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s 2025-09-06 00:46:04.605822 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s 2025-09-06 00:46:04.606459 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.63s 2025-09-06 00:46:04.606482 | orchestrator | 2025-09-06 00:46:04 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:46:04.606491 | orchestrator | 2025-09-06 00:46:04 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:46:04.606499 | orchestrator | 2025-09-06 00:46:04 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:46:04.606515 | orchestrator | 2025-09-06 00:46:04 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:46:04.606523 | orchestrator | 2025-09-06 00:46:04 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:46:07.664680 | orchestrator | 2025-09-06 00:46:07 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:46:07.664786 | orchestrator | 2025-09-06 00:46:07 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:46:07.664802 | orchestrator | 2025-09-06 00:46:07 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:46:07.664814 | orchestrator | 2025-09-06 00:46:07 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:46:07.664825 | orchestrator | 2025-09-06 00:46:07 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:46:10.671279 | orchestrator | 2025-09-06 00:46:10 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:46:10.674975 | orchestrator | 2025-09-06 00:46:10 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:46:10.675030 | orchestrator | 2025-09-06 00:46:10 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:46:10.675043 | orchestrator | 2025-09-06 00:46:10 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:46:10.675055 | orchestrator | 2025-09-06 00:46:10 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:46:13.710979 | orchestrator | 2025-09-06 00:46:13 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:46:13.712420 | orchestrator | 2025-09-06 00:46:13 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:46:13.713144 | orchestrator | 2025-09-06 00:46:13 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:46:13.714190 | orchestrator | 2025-09-06 00:46:13 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:46:13.714451 | orchestrator | 2025-09-06 00:46:13 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:46:16.794114 | orchestrator | 2025-09-06 00:46:16 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:46:16.794206 | orchestrator | 2025-09-06 00:46:16 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:46:16.794221 | orchestrator | 2025-09-06 00:46:16 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:46:16.794233 | orchestrator | 2025-09-06 00:46:16 | INFO  | Task 
1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:46:16.794243 | orchestrator | 2025-09-06 00:46:16 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:23.901917 | orchestrator | 2025-09-06 00:47:23 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:23.902475 | orchestrator | 2025-09-06 00:47:23 | INFO  | Task 
36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:47:23.904549 | orchestrator | 2025-09-06 00:47:23 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:23.905468 | orchestrator | 2025-09-06 00:47:23 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:23.906862 | orchestrator | 2025-09-06 00:47:23 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:26.932372 | orchestrator | 2025-09-06 00:47:26 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:26.934481 | orchestrator | 2025-09-06 00:47:26 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:47:26.936250 | orchestrator | 2025-09-06 00:47:26 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:26.937903 | orchestrator | 2025-09-06 00:47:26 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:26.938104 | orchestrator | 2025-09-06 00:47:26 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:29.984749 | orchestrator | 2025-09-06 00:47:29 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:29.989300 | orchestrator | 2025-09-06 00:47:29 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:47:29.992722 | orchestrator | 2025-09-06 00:47:29 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:29.994300 | orchestrator | 2025-09-06 00:47:29 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:29.994325 | orchestrator | 2025-09-06 00:47:29 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:33.024849 | orchestrator | 2025-09-06 00:47:33 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:33.024921 | orchestrator | 2025-09-06 00:47:33 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:47:33.024932 | orchestrator | 2025-09-06 00:47:33 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:33.024941 | orchestrator | 2025-09-06 00:47:33 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:33.024949 | orchestrator | 2025-09-06 00:47:33 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:36.050354 | orchestrator | 2025-09-06 00:47:36 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:36.052746 | orchestrator | 2025-09-06 00:47:36 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state STARTED 2025-09-06 00:47:36.052776 | orchestrator | 2025-09-06 00:47:36 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:36.052805 | orchestrator | 2025-09-06 00:47:36 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:36.052817 | orchestrator | 2025-09-06 00:47:36 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:39.079607 | orchestrator | 2025-09-06 00:47:39 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:39.081588 | orchestrator | 2025-09-06 00:47:39 | INFO  | Task 36612d34-b5ea-42cb-b8a3-f4a14a106ff2 is in state SUCCESS 2025-09-06 00:47:39.083707 | orchestrator | 2025-09-06 00:47:39.083739 | orchestrator | 2025-09-06 00:47:39.083751 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 
2025-09-06 00:47:39.083763 | orchestrator | 2025-09-06 00:47:39.083775 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-06 00:47:39.083786 | orchestrator | Saturday 06 September 2025 00:45:11 +0000 (0:00:00.095) 0:00:00.095 **** 2025-09-06 00:47:39.083797 | orchestrator | ok: [localhost] => { 2025-09-06 00:47:39.083810 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-06 00:47:39.083822 | orchestrator | } 2025-09-06 00:47:39.083834 | orchestrator | 2025-09-06 00:47:39.083846 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-06 00:47:39.083857 | orchestrator | Saturday 06 September 2025 00:45:11 +0000 (0:00:00.040) 0:00:00.136 **** 2025-09-06 00:47:39.083868 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-06 00:47:39.083881 | orchestrator | ...ignoring 2025-09-06 00:47:39.083893 | orchestrator | 2025-09-06 00:47:39.083905 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-06 00:47:39.083916 | orchestrator | Saturday 06 September 2025 00:45:14 +0000 (0:00:03.099) 0:00:03.235 **** 2025-09-06 00:47:39.083927 | orchestrator | skipping: [localhost] 2025-09-06 00:47:39.083938 | orchestrator | 2025-09-06 00:47:39.083949 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-06 00:47:39.083960 | orchestrator | Saturday 06 September 2025 00:45:15 +0000 (0:00:00.184) 0:00:03.420 **** 2025-09-06 00:47:39.083971 | orchestrator | ok: [localhost] 2025-09-06 00:47:39.083982 | orchestrator | 2025-09-06 00:47:39.083993 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:47:39.084004 | orchestrator | 2025-09-06 00:47:39.084015 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:47:39.084025 | orchestrator | Saturday 06 September 2025 00:45:15 +0000 (0:00:00.505) 0:00:03.925 **** 2025-09-06 00:47:39.084036 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:47:39.084047 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:47:39.084058 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:47:39.084093 | orchestrator | 2025-09-06 00:47:39.084105 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:47:39.084116 | orchestrator | Saturday 06 September 2025 00:45:16 +0000 (0:00:01.065) 0:00:04.991 **** 2025-09-06 00:47:39.084127 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-06 00:47:39.084139 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-06 00:47:39.084150 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-06 00:47:39.084161 | orchestrator | 2025-09-06 00:47:39.084172 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-06 00:47:39.084182 | orchestrator | 2025-09-06 00:47:39.084193 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-06 00:47:39.084204 | orchestrator | Saturday 06 September 2025 00:45:17 +0000 (0:00:00.901) 0:00:05.892 **** 2025-09-06 00:47:39.084215 | orchestrator | included: 
/ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:47:39.084226 | orchestrator | 2025-09-06 00:47:39.084236 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-06 00:47:39.084247 | orchestrator | Saturday 06 September 2025 00:45:17 +0000 (0:00:00.419) 0:00:06.311 **** 2025-09-06 00:47:39.084279 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:47:39.084291 | orchestrator | 2025-09-06 00:47:39.084302 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-06 00:47:39.084315 | orchestrator | Saturday 06 September 2025 00:45:18 +0000 (0:00:01.012) 0:00:07.324 **** 2025-09-06 00:47:39.084329 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:47:39.084342 | orchestrator | 2025-09-06 00:47:39.084354 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-09-06 00:47:39.084367 | orchestrator | Saturday 06 September 2025 00:45:19 +0000 (0:00:00.315) 0:00:07.639 **** 2025-09-06 00:47:39.084379 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:47:39.084392 | orchestrator | 2025-09-06 00:47:39.084404 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-06 00:47:39.084417 | orchestrator | Saturday 06 September 2025 00:45:19 +0000 (0:00:00.355) 0:00:07.995 **** 2025-09-06 00:47:39.084429 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:47:39.084442 | orchestrator | 2025-09-06 00:47:39.084454 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-06 00:47:39.084467 | orchestrator | Saturday 06 September 2025 00:45:19 +0000 (0:00:00.280) 0:00:08.275 **** 2025-09-06 00:47:39.084479 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:47:39.084491 | orchestrator | 2025-09-06 00:47:39.084503 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-06 00:47:39.084516 | orchestrator | Saturday 06 September 2025 00:45:20 +0000 (0:00:00.362) 0:00:08.637 **** 2025-09-06 00:47:39.084528 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:47:39.084541 | orchestrator | 2025-09-06 00:47:39.084553 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-06 00:47:39.084566 | orchestrator | Saturday 06 September 2025 00:45:21 +0000 (0:00:00.924) 0:00:09.562 **** 2025-09-06 00:47:39.084578 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:47:39.084591 | orchestrator | 2025-09-06 00:47:39.084617 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-06 00:47:39.084630 | orchestrator | Saturday 06 September 2025 00:45:21 +0000 (0:00:00.777) 0:00:10.339 **** 2025-09-06 00:47:39.084643 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:47:39.084656 | orchestrator | 2025-09-06 00:47:39.084669 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-06 00:47:39.084681 | orchestrator | Saturday 06 September 2025 00:45:22 +0000 (0:00:00.323) 0:00:10.663 **** 2025-09-06 00:47:39.084691 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:47:39.084702 | orchestrator | 2025-09-06 00:47:39.084721 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 
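The "Set kolla_action_rabbitmq" play above implements a simple decision: probe the RabbitMQ management endpoint and, depending on whether it already answers, run the role with the upgrade action or the regular deploy action. A minimal sketch of that pattern follows; the VIP variable name and the 'deploy' default are assumptions for illustration, while the task names, the port, and the two-second timeout come from the output above.

- name: Check RabbitMQ service
  ansible.builtin.wait_for:
    host: "{{ kolla_internal_vip_address | default('192.168.16.9') }}"  # assumed variable name; address taken from the log
    port: 15672                                  # RabbitMQ management port
    search_regex: "RabbitMQ Management"
    timeout: 2
  register: rabbitmq_check
  ignore_errors: true                            # a timeout only means "not deployed yet"

- name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
  ansible.builtin.set_fact:
    kolla_action_rabbitmq: upgrade
  when: rabbitmq_check is succeeded

- name: Set kolla_action_rabbitmq = kolla_action_ng
  ansible.builtin.set_fact:
    kolla_action_rabbitmq: "{{ kolla_action_ng | default('deploy') }}"  # kolla_action_ng assumed to be defined elsewhere
  when: rabbitmq_check is failed
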
2025-09-06 00:47:39.084741 | orchestrator | Saturday 06 September 2025 00:45:22 +0000 (0:00:00.327) 0:00:10.990 **** 2025-09-06 00:47:39.084757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:47:39.084775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:47:39.084788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:47:39.084801 | orchestrator | 2025-09-06 00:47:39.084812 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 
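Each loop item printed above is the complete rabbitmq service definition that kolla-ansible iterates over. Re-rendered as YAML for readability (all values copied from the log output; bootstrap_environment only adds KOLLA_BOOTSTRAP on top of the same environment), it reads:

rabbitmq:
  container_name: rabbitmq
  group: rabbitmq
  enabled: true
  image: registry.osism.tech/kolla/rabbitmq:2024.2
  environment:
    KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
    RABBITMQ_CLUSTER_COOKIE: zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT
    RABBITMQ_LOG_DIR: /var/log/kolla/rabbitmq
  volumes:
    - /etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - rabbitmq:/var/lib/rabbitmq/
    - kolla_logs:/var/log/kolla/
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_rabbitmq"]
    timeout: "30"
  haproxy:
    rabbitmq_management:
      enabled: "yes"
      mode: http
      port: "15672"
      host_group: rabbitmq
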
2025-09-06 00:47:39.084823 | orchestrator | Saturday 06 September 2025 00:45:23 +0000 (0:00:00.864) 0:00:11.855 **** 2025-09-06 00:47:39.084849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:47:39.084870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:47:39.084883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:47:39.084895 | orchestrator | 2025-09-06 00:47:39.084906 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 
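The config.json task above follows the usual kolla-ansible convention: render a per-service JSON template into the node config directory and notify the container restart handler. A minimal sketch, assuming the conventional template name and file mode; the handler name and the /etc/kolla/rabbitmq path are visible in the log output:

- name: Copying over config.json files for services
  become: true
  ansible.builtin.template:
    src: rabbitmq.json.j2                    # assumed template name following the kolla convention
    dest: /etc/kolla/rabbitmq/config.json    # bind-mounted read-only into the container (see volumes above)
    mode: "0660"                             # assumed mode
  notify:
    - Restart rabbitmq container
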
2025-09-06 00:47:39.084917 | orchestrator | Saturday 06 September 2025 00:45:25 +0000 (0:00:02.363) 0:00:14.219 **** 2025-09-06 00:47:39.084928 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-06 00:47:39.085047 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-06 00:47:39.085058 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-06 00:47:39.085069 | orchestrator | 2025-09-06 00:47:39.085080 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-06 00:47:39.085091 | orchestrator | Saturday 06 September 2025 00:45:28 +0000 (0:00:02.825) 0:00:17.044 **** 2025-09-06 00:47:39.085102 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-06 00:47:39.085113 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-06 00:47:39.085124 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-06 00:47:39.085144 | orchestrator | 2025-09-06 00:47:39.085156 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-06 00:47:39.085166 | orchestrator | Saturday 06 September 2025 00:45:33 +0000 (0:00:04.472) 0:00:21.517 **** 2025-09-06 00:47:39.085177 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-06 00:47:39.085194 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-06 00:47:39.085205 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-06 00:47:39.085216 | orchestrator | 2025-09-06 00:47:39.085227 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-06 00:47:39.085238 | orchestrator | Saturday 06 September 2025 00:45:34 +0000 (0:00:01.812) 0:00:23.329 **** 2025-09-06 00:47:39.085256 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-06 00:47:39.085289 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-06 00:47:39.085300 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-06 00:47:39.085311 | orchestrator | 2025-09-06 00:47:39.085322 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-06 00:47:39.085333 | orchestrator | Saturday 06 September 2025 00:45:36 +0000 (0:00:02.039) 0:00:25.369 **** 2025-09-06 00:47:39.085343 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-06 00:47:39.085354 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-06 00:47:39.085365 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-06 00:47:39.085376 | orchestrator | 2025-09-06 00:47:39.085387 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-06 00:47:39.085398 | orchestrator | Saturday 06 September 2025 00:45:38 +0000 (0:00:01.716) 0:00:27.085 **** 2025-09-06 
00:47:39.085409 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-06 00:47:39.085419 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-06 00:47:39.085430 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-06 00:47:39.085441 | orchestrator | 2025-09-06 00:47:39.085452 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-06 00:47:39.085462 | orchestrator | Saturday 06 September 2025 00:45:41 +0000 (0:00:02.656) 0:00:29.742 **** 2025-09-06 00:47:39.085473 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:47:39.085484 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:47:39.085495 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:47:39.085506 | orchestrator | 2025-09-06 00:47:39.085517 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-06 00:47:39.085528 | orchestrator | Saturday 06 September 2025 00:45:41 +0000 (0:00:00.605) 0:00:30.348 **** 2025-09-06 00:47:39.085541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:47:39.085565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:47:39.085586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:47:39.085599 | orchestrator | 2025-09-06 00:47:39.085610 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-06 00:47:39.085621 | orchestrator | Saturday 06 September 2025 00:45:43 +0000 (0:00:01.599) 0:00:31.947 **** 2025-09-06 00:47:39.085632 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:47:39.085642 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:47:39.085653 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:47:39.085664 | orchestrator | 2025-09-06 00:47:39.085675 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-06 00:47:39.085686 | orchestrator | Saturday 06 September 2025 00:45:44 +0000 (0:00:00.974) 0:00:32.922 **** 2025-09-06 00:47:39.085697 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:47:39.085707 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:47:39.085718 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:47:39.085729 | orchestrator | 2025-09-06 00:47:39.085740 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-06 00:47:39.085750 | orchestrator | Saturday 06 September 2025 00:45:51 +0000 (0:00:07.325) 0:00:40.248 **** 2025-09-06 00:47:39.085761 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:47:39.085772 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:47:39.085782 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:47:39.085793 | orchestrator | 2025-09-06 00:47:39.085804 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-06 00:47:39.085815 | orchestrator | 2025-09-06 00:47:39.085826 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-06 00:47:39.085837 | orchestrator | Saturday 06 September 2025 00:45:52 +0000 (0:00:00.515) 0:00:40.763 **** 2025-09-06 00:47:39.085853 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:47:39.085864 | orchestrator | 2025-09-06 00:47:39.085875 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-06 00:47:39.085886 | orchestrator | Saturday 06 September 2025 00:45:53 +0000 (0:00:00.702) 0:00:41.466 **** 2025-09-06 00:47:39.085897 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:47:39.085907 | orchestrator | 2025-09-06 00:47:39.085918 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-06 00:47:39.085929 | orchestrator | Saturday 06 September 2025 00:45:53 +0000 (0:00:00.367) 0:00:41.833 **** 
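The per-node "Restart rabbitmq services" plays restart one broker at a time and then block until it is back up (see the "Waiting for rabbitmq to start" timings in the recap further down). A rough sketch of that rolling pattern, written with plain docker and rabbitmqctl commands in place of the kolla_docker module that kolla-ansible actually uses; serial: 1 and the rabbitmq host group are assumptions about how the serialization is expressed:

- name: Restart rabbitmq services
  hosts: rabbitmq
  serial: 1                                  # one node per pass, matching the three per-node plays in the log
  tasks:
    - name: Restart rabbitmq container
      become: true
      ansible.builtin.command: docker restart rabbitmq
      changed_when: true

    - name: Waiting for rabbitmq to start
      become: true
      ansible.builtin.command: docker exec rabbitmq rabbitmqctl await_startup
      changed_when: true
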
2025-09-06 00:47:39.085940 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:47:39.085951 | orchestrator | 2025-09-06 00:47:39.085961 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-06 00:47:39.085972 | orchestrator | Saturday 06 September 2025 00:45:55 +0000 (0:00:01.876) 0:00:43.710 **** 2025-09-06 00:47:39.085983 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:47:39.085994 | orchestrator | 2025-09-06 00:47:39.086005 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-06 00:47:39.086065 | orchestrator | 2025-09-06 00:47:39.086080 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-06 00:47:39.086091 | orchestrator | Saturday 06 September 2025 00:46:54 +0000 (0:00:59.433) 0:01:43.144 **** 2025-09-06 00:47:39.086102 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:47:39.086112 | orchestrator | 2025-09-06 00:47:39.086123 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-06 00:47:39.086134 | orchestrator | Saturday 06 September 2025 00:46:55 +0000 (0:00:00.684) 0:01:43.828 **** 2025-09-06 00:47:39.086144 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:47:39.086155 | orchestrator | 2025-09-06 00:47:39.086166 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-06 00:47:39.086176 | orchestrator | Saturday 06 September 2025 00:46:55 +0000 (0:00:00.241) 0:01:44.070 **** 2025-09-06 00:47:39.086187 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:47:39.086198 | orchestrator | 2025-09-06 00:47:39.086209 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-06 00:47:39.086219 | orchestrator | Saturday 06 September 2025 00:46:57 +0000 (0:00:01.665) 0:01:45.736 **** 2025-09-06 00:47:39.086230 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:47:39.086241 | orchestrator | 2025-09-06 00:47:39.086251 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-06 00:47:39.086296 | orchestrator | 2025-09-06 00:47:39.086308 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-06 00:47:39.086319 | orchestrator | Saturday 06 September 2025 00:47:14 +0000 (0:00:16.892) 0:02:02.629 **** 2025-09-06 00:47:39.086330 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:47:39.086341 | orchestrator | 2025-09-06 00:47:39.086351 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-06 00:47:39.086362 | orchestrator | Saturday 06 September 2025 00:47:14 +0000 (0:00:00.632) 0:02:03.262 **** 2025-09-06 00:47:39.086378 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:47:39.086389 | orchestrator | 2025-09-06 00:47:39.086400 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-06 00:47:39.086411 | orchestrator | Saturday 06 September 2025 00:47:15 +0000 (0:00:00.264) 0:02:03.527 **** 2025-09-06 00:47:39.086422 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:47:39.086433 | orchestrator | 2025-09-06 00:47:39.086444 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-06 00:47:39.086462 | orchestrator | Saturday 06 September 2025 00:47:16 +0000 (0:00:01.833) 0:02:05.360 **** 2025-09-06 
00:47:39.086474 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:47:39.086484 | orchestrator | 2025-09-06 00:47:39.086495 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-06 00:47:39.086506 | orchestrator | 2025-09-06 00:47:39.086516 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-06 00:47:39.086535 | orchestrator | Saturday 06 September 2025 00:47:33 +0000 (0:00:16.459) 0:02:21.820 **** 2025-09-06 00:47:39.086546 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:47:39.086557 | orchestrator | 2025-09-06 00:47:39.086568 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-06 00:47:39.086578 | orchestrator | Saturday 06 September 2025 00:47:34 +0000 (0:00:00.673) 0:02:22.493 **** 2025-09-06 00:47:39.086589 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-06 00:47:39.086600 | orchestrator | enable_outward_rabbitmq_True 2025-09-06 00:47:39.086611 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-06 00:47:39.086621 | orchestrator | outward_rabbitmq_restart 2025-09-06 00:47:39.086632 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:47:39.086643 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:47:39.086654 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:47:39.086664 | orchestrator | 2025-09-06 00:47:39.086675 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-06 00:47:39.086686 | orchestrator | skipping: no hosts matched 2025-09-06 00:47:39.086696 | orchestrator | 2025-09-06 00:47:39.086707 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-06 00:47:39.086718 | orchestrator | skipping: no hosts matched 2025-09-06 00:47:39.086729 | orchestrator | 2025-09-06 00:47:39.086740 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-06 00:47:39.086750 | orchestrator | skipping: no hosts matched 2025-09-06 00:47:39.086761 | orchestrator | 2025-09-06 00:47:39.086772 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:47:39.086783 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-06 00:47:39.086795 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-06 00:47:39.086806 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:47:39.086817 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:47:39.086828 | orchestrator | 2025-09-06 00:47:39.086839 | orchestrator | 2025-09-06 00:47:39.086849 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:47:39.086861 | orchestrator | Saturday 06 September 2025 00:47:37 +0000 (0:00:03.053) 0:02:25.546 **** 2025-09-06 00:47:39.086871 | orchestrator | =============================================================================== 2025-09-06 00:47:39.086882 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 92.79s 2025-09-06 00:47:39.086893 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container 
------------------------- 7.33s 2025-09-06 00:47:39.086904 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.38s 2025-09-06 00:47:39.086915 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.47s 2025-09-06 00:47:39.087016 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.10s 2025-09-06 00:47:39.087030 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.05s 2025-09-06 00:47:39.087041 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.83s 2025-09-06 00:47:39.087052 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.66s 2025-09-06 00:47:39.087063 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.36s 2025-09-06 00:47:39.087073 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.04s 2025-09-06 00:47:39.087084 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.02s 2025-09-06 00:47:39.087102 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.81s 2025-09-06 00:47:39.087113 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.72s 2025-09-06 00:47:39.087124 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.60s 2025-09-06 00:47:39.087135 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.07s 2025-09-06 00:47:39.087146 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s 2025-09-06 00:47:39.087157 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.97s 2025-09-06 00:47:39.087168 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.92s 2025-09-06 00:47:39.087178 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s 2025-09-06 00:47:39.087201 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.87s 2025-09-06 00:47:39.087213 | orchestrator | 2025-09-06 00:47:39 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:39.087229 | orchestrator | 2025-09-06 00:47:39 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:39.087515 | orchestrator | 2025-09-06 00:47:39 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:42.121395 | orchestrator | 2025-09-06 00:47:42 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:42.122117 | orchestrator | 2025-09-06 00:47:42 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:42.123816 | orchestrator | 2025-09-06 00:47:42 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:42.123852 | orchestrator | 2025-09-06 00:47:42 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:45.163079 | orchestrator | 2025-09-06 00:47:45 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:45.167305 | orchestrator | 2025-09-06 00:47:45 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:45.167910 | orchestrator | 2025-09-06 00:47:45 | INFO  | Task 
1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:45.168100 | orchestrator | 2025-09-06 00:47:45 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:48.204891 | orchestrator | 2025-09-06 00:47:48 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:48.206631 | orchestrator | 2025-09-06 00:47:48 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:48.207946 | orchestrator | 2025-09-06 00:47:48 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:48.207965 | orchestrator | 2025-09-06 00:47:48 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:51.248681 | orchestrator | 2025-09-06 00:47:51 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:51.248915 | orchestrator | 2025-09-06 00:47:51 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:51.251167 | orchestrator | 2025-09-06 00:47:51 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:51.251638 | orchestrator | 2025-09-06 00:47:51 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:54.294011 | orchestrator | 2025-09-06 00:47:54 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:54.295815 | orchestrator | 2025-09-06 00:47:54 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:54.296004 | orchestrator | 2025-09-06 00:47:54 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:54.296204 | orchestrator | 2025-09-06 00:47:54 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:47:57.336123 | orchestrator | 2025-09-06 00:47:57 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:47:57.336759 | orchestrator | 2025-09-06 00:47:57 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:47:57.338490 | orchestrator | 2025-09-06 00:47:57 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:47:57.338520 | orchestrator | 2025-09-06 00:47:57 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:00.403978 | orchestrator | 2025-09-06 00:48:00 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:00.406272 | orchestrator | 2025-09-06 00:48:00 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:00.408675 | orchestrator | 2025-09-06 00:48:00 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:00.409092 | orchestrator | 2025-09-06 00:48:00 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:03.457199 | orchestrator | 2025-09-06 00:48:03 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:03.458011 | orchestrator | 2025-09-06 00:48:03 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:03.459004 | orchestrator | 2025-09-06 00:48:03 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:03.459029 | orchestrator | 2025-09-06 00:48:03 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:06.491924 | orchestrator | 2025-09-06 00:48:06 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:06.492398 | orchestrator | 2025-09-06 00:48:06 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state 
STARTED 2025-09-06 00:48:06.493632 | orchestrator | 2025-09-06 00:48:06 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:06.493656 | orchestrator | 2025-09-06 00:48:06 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:09.532655 | orchestrator | 2025-09-06 00:48:09 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:09.533301 | orchestrator | 2025-09-06 00:48:09 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:09.534176 | orchestrator | 2025-09-06 00:48:09 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:09.534201 | orchestrator | 2025-09-06 00:48:09 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:12.576455 | orchestrator | 2025-09-06 00:48:12 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:12.577996 | orchestrator | 2025-09-06 00:48:12 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:12.578812 | orchestrator | 2025-09-06 00:48:12 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:12.578844 | orchestrator | 2025-09-06 00:48:12 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:15.607690 | orchestrator | 2025-09-06 00:48:15 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:15.608053 | orchestrator | 2025-09-06 00:48:15 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:15.608746 | orchestrator | 2025-09-06 00:48:15 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:15.608770 | orchestrator | 2025-09-06 00:48:15 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:18.658376 | orchestrator | 2025-09-06 00:48:18 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:18.660503 | orchestrator | 2025-09-06 00:48:18 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:18.662845 | orchestrator | 2025-09-06 00:48:18 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:18.663235 | orchestrator | 2025-09-06 00:48:18 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:21.707666 | orchestrator | 2025-09-06 00:48:21 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:21.709231 | orchestrator | 2025-09-06 00:48:21 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:21.710971 | orchestrator | 2025-09-06 00:48:21 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:21.711316 | orchestrator | 2025-09-06 00:48:21 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:24.750463 | orchestrator | 2025-09-06 00:48:24 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:24.750566 | orchestrator | 2025-09-06 00:48:24 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:24.751086 | orchestrator | 2025-09-06 00:48:24 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:24.751187 | orchestrator | 2025-09-06 00:48:24 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:27.785110 | orchestrator | 2025-09-06 00:48:27 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:27.788593 | orchestrator 
| 2025-09-06 00:48:27 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:27.790332 | orchestrator | 2025-09-06 00:48:27 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:27.790407 | orchestrator | 2025-09-06 00:48:27 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:30.825416 | orchestrator | 2025-09-06 00:48:30 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:30.827043 | orchestrator | 2025-09-06 00:48:30 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:30.828756 | orchestrator | 2025-09-06 00:48:30 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:30.828788 | orchestrator | 2025-09-06 00:48:30 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:33.869583 | orchestrator | 2025-09-06 00:48:33 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:33.873350 | orchestrator | 2025-09-06 00:48:33 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:33.875289 | orchestrator | 2025-09-06 00:48:33 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:33.875319 | orchestrator | 2025-09-06 00:48:33 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:36.913945 | orchestrator | 2025-09-06 00:48:36 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:36.918881 | orchestrator | 2025-09-06 00:48:36 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:36.919985 | orchestrator | 2025-09-06 00:48:36 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:36.920009 | orchestrator | 2025-09-06 00:48:36 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:39.956994 | orchestrator | 2025-09-06 00:48:39 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:39.959216 | orchestrator | 2025-09-06 00:48:39 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:39.959924 | orchestrator | 2025-09-06 00:48:39 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:39.959950 | orchestrator | 2025-09-06 00:48:39 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:42.997069 | orchestrator | 2025-09-06 00:48:42 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:42.999829 | orchestrator | 2025-09-06 00:48:43 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state STARTED 2025-09-06 00:48:43.001323 | orchestrator | 2025-09-06 00:48:43 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:43.001349 | orchestrator | 2025-09-06 00:48:43 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:46.056662 | orchestrator | 2025-09-06 00:48:46.056764 | orchestrator | 2025-09-06 00:48:46.056778 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:48:46.056791 | orchestrator | 2025-09-06 00:48:46.056803 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:48:46.056815 | orchestrator | Saturday 06 September 2025 00:46:07 +0000 (0:00:00.204) 0:00:00.204 **** 2025-09-06 00:48:46.056826 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:48:46.056838 | orchestrator | ok: 
[testbed-node-4] 2025-09-06 00:48:46.056849 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:48:46.056860 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.056871 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.056882 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.056893 | orchestrator | 2025-09-06 00:48:46.056904 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:48:46.056915 | orchestrator | Saturday 06 September 2025 00:46:08 +0000 (0:00:00.623) 0:00:00.828 **** 2025-09-06 00:48:46.056926 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-06 00:48:46.056937 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-06 00:48:46.056948 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-06 00:48:46.056959 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-06 00:48:46.056970 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-06 00:48:46.056980 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-06 00:48:46.056991 | orchestrator | 2025-09-06 00:48:46.057002 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-06 00:48:46.057013 | orchestrator | 2025-09-06 00:48:46.057024 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-06 00:48:46.057034 | orchestrator | Saturday 06 September 2025 00:46:09 +0000 (0:00:00.870) 0:00:01.698 **** 2025-09-06 00:48:46.057047 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:48:46.057059 | orchestrator | 2025-09-06 00:48:46.057070 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-06 00:48:46.057081 | orchestrator | Saturday 06 September 2025 00:46:10 +0000 (0:00:01.072) 0:00:02.771 **** 2025-09-06 00:48:46.057094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057109 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057284 | orchestrator | 2025-09-06 00:48:46.057315 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-06 00:48:46.057329 | orchestrator | Saturday 06 September 2025 00:46:11 +0000 (0:00:01.323) 0:00:04.095 **** 2025-09-06 00:48:46.057342 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057357 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057437 | orchestrator | 2025-09-06 00:48:46.057451 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-06 00:48:46.057464 | orchestrator | Saturday 06 September 2025 00:46:13 +0000 (0:00:01.579) 0:00:05.674 **** 2025-09-06 00:48:46.057477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057490 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057574 | orchestrator | 2025-09-06 00:48:46.057594 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-06 00:48:46.057605 | orchestrator | Saturday 06 September 2025 00:46:14 +0000 (0:00:01.034) 0:00:06.709 **** 2025-09-06 00:48:46.057617 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057688 | orchestrator | 2025-09-06 00:48:46.057705 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-06 00:48:46.057716 | orchestrator | Saturday 06 September 2025 00:46:15 
+0000 (0:00:01.451) 0:00:08.160 **** 2025-09-06 00:48:46.057727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.057804 | orchestrator | 2025-09-06 00:48:46.057815 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-06 00:48:46.057826 | orchestrator | Saturday 06 September 2025 00:46:17 +0000 (0:00:01.778) 0:00:09.939 **** 2025-09-06 00:48:46.057837 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:48:46.057848 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:48:46.057859 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:48:46.057870 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:48:46.057880 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:48:46.057891 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:48:46.057902 | orchestrator | 
2025-09-06 00:48:46.057912 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-06 00:48:46.057923 | orchestrator | Saturday 06 September 2025 00:46:20 +0000 (0:00:02.713) 0:00:12.652 **** 2025-09-06 00:48:46.057934 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-06 00:48:46.057946 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-06 00:48:46.057957 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-06 00:48:46.057967 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-06 00:48:46.057978 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-06 00:48:46.057989 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-06 00:48:46.057999 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-06 00:48:46.058010 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-06 00:48:46.058105 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-06 00:48:46.058118 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-06 00:48:46.058128 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-06 00:48:46.058145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-06 00:48:46.058157 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-06 00:48:46.058219 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-06 00:48:46.058231 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-06 00:48:46.058242 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-06 00:48:46.058253 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-06 00:48:46.058264 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-06 00:48:46.058275 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-06 00:48:46.058287 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-06 00:48:46.058298 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-06 00:48:46.058308 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-06 00:48:46.058319 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 
'value': '60000'}) 2025-09-06 00:48:46.058329 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-06 00:48:46.058340 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-06 00:48:46.058351 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-06 00:48:46.058362 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-06 00:48:46.058372 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-06 00:48:46.058388 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-06 00:48:46.058399 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-06 00:48:46.058410 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-06 00:48:46.058420 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-06 00:48:46.058432 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-06 00:48:46.058443 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-06 00:48:46.058454 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-06 00:48:46.058464 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-06 00:48:46.058475 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-06 00:48:46.058486 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-06 00:48:46.058505 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-06 00:48:46.058524 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-06 00:48:46.058554 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-06 00:48:46.058571 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-06 00:48:46.058586 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-06 00:48:46.058597 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-06 00:48:46.058615 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-06 00:48:46.058626 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-06 00:48:46.058637 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-06 00:48:46.058648 | orchestrator 
| ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-06 00:48:46.058659 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-06 00:48:46.058670 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-06 00:48:46.058681 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-06 00:48:46.058692 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-06 00:48:46.058702 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-06 00:48:46.058713 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-06 00:48:46.058724 | orchestrator | 2025-09-06 00:48:46.058734 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-06 00:48:46.058745 | orchestrator | Saturday 06 September 2025 00:46:39 +0000 (0:00:19.798) 0:00:32.451 **** 2025-09-06 00:48:46.058756 | orchestrator | 2025-09-06 00:48:46.058767 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-06 00:48:46.058777 | orchestrator | Saturday 06 September 2025 00:46:40 +0000 (0:00:00.202) 0:00:32.654 **** 2025-09-06 00:48:46.058788 | orchestrator | 2025-09-06 00:48:46.058798 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-06 00:48:46.058809 | orchestrator | Saturday 06 September 2025 00:46:40 +0000 (0:00:00.060) 0:00:32.714 **** 2025-09-06 00:48:46.058820 | orchestrator | 2025-09-06 00:48:46.058831 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-06 00:48:46.058841 | orchestrator | Saturday 06 September 2025 00:46:40 +0000 (0:00:00.068) 0:00:32.783 **** 2025-09-06 00:48:46.058852 | orchestrator | 2025-09-06 00:48:46.058863 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-06 00:48:46.058873 | orchestrator | Saturday 06 September 2025 00:46:40 +0000 (0:00:00.087) 0:00:32.870 **** 2025-09-06 00:48:46.058884 | orchestrator | 2025-09-06 00:48:46.058894 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-06 00:48:46.058905 | orchestrator | Saturday 06 September 2025 00:46:40 +0000 (0:00:00.063) 0:00:32.934 **** 2025-09-06 00:48:46.058915 | orchestrator | 2025-09-06 00:48:46.058926 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-06 00:48:46.058941 | orchestrator | Saturday 06 September 2025 00:46:40 +0000 (0:00:00.058) 0:00:32.992 **** 2025-09-06 00:48:46.058952 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:48:46.058970 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:48:46.058981 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:48:46.058991 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.059002 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.059013 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.059023 | orchestrator | 2025-09-06 00:48:46.059034 | 
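The "Configure OVN in OVSDB" task above writes ovn-controller's settings as external_ids on the local Open_vSwitch table: a per-node Geneve tunnel endpoint (ovn-encap-ip 192.168.16.10 through .15), ovn-encap-type geneve, the southbound cluster endpoints (ovn-remote tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642), the probe intervals, and ovn-monitor-all. Note the per-host differences in the results: ovn-bridge-mappings physnet1:br-ex and ovn-cms-options enable-chassis-as-gw,availability-zones=nova are set ("present") only on testbed-node-0/1/2, which are therefore the chassis eligible to host gateway ports, while testbed-node-3/4/5 get ovn-chassis-mac-mappings instead and have the gateway-related keys removed ("absent"). As a hedged sketch of what writing one such key looks like (the module and parameters here are assumptions for illustration, not the actual kolla-ansible task):

- name: Point ovn-controller at the OVN southbound DB cluster (illustrative sketch only)
  openvswitch.openvswitch.openvswitch_db:
    table: Open_vSwitch
    record: .
    col: external_ids
    key: ovn-remote
    value: "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"
  become: true

On the command line the same write corresponds to "ovs-vsctl set open_vswitch . external_ids:ovn-remote=<value>".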
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-06 00:48:46.059045 | orchestrator | Saturday 06 September 2025 00:46:41 +0000 (0:00:01.468) 0:00:34.460 **** 2025-09-06 00:48:46.059056 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:48:46.059067 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:48:46.059077 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:48:46.059088 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:48:46.059098 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:48:46.059109 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:48:46.059120 | orchestrator | 2025-09-06 00:48:46.059131 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-06 00:48:46.059141 | orchestrator | 2025-09-06 00:48:46.059152 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-06 00:48:46.059183 | orchestrator | Saturday 06 September 2025 00:47:22 +0000 (0:00:40.353) 0:01:14.814 **** 2025-09-06 00:48:46.059195 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:48:46.059206 | orchestrator | 2025-09-06 00:48:46.059217 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-06 00:48:46.059227 | orchestrator | Saturday 06 September 2025 00:47:23 +0000 (0:00:00.805) 0:01:15.619 **** 2025-09-06 00:48:46.059238 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:48:46.059249 | orchestrator | 2025-09-06 00:48:46.059260 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-06 00:48:46.059270 | orchestrator | Saturday 06 September 2025 00:47:23 +0000 (0:00:00.681) 0:01:16.300 **** 2025-09-06 00:48:46.059281 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.059292 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.059303 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.059313 | orchestrator | 2025-09-06 00:48:46.059324 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-06 00:48:46.059335 | orchestrator | Saturday 06 September 2025 00:47:25 +0000 (0:00:01.359) 0:01:17.659 **** 2025-09-06 00:48:46.059346 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.059356 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.059367 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.059384 | orchestrator | 2025-09-06 00:48:46.059396 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-06 00:48:46.059407 | orchestrator | Saturday 06 September 2025 00:47:25 +0000 (0:00:00.280) 0:01:17.940 **** 2025-09-06 00:48:46.059417 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.059429 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.059439 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.059450 | orchestrator | 2025-09-06 00:48:46.059461 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-06 00:48:46.059472 | orchestrator | Saturday 06 September 2025 00:47:25 +0000 (0:00:00.263) 0:01:18.203 **** 2025-09-06 00:48:46.059483 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.059494 | orchestrator | ok: [testbed-node-1] 2025-09-06 
00:48:46.059505 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.059516 | orchestrator | 2025-09-06 00:48:46.059527 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-06 00:48:46.059538 | orchestrator | Saturday 06 September 2025 00:47:26 +0000 (0:00:00.283) 0:01:18.487 **** 2025-09-06 00:48:46.059549 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.059559 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.059570 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.059581 | orchestrator | 2025-09-06 00:48:46.059592 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-06 00:48:46.059609 | orchestrator | Saturday 06 September 2025 00:47:26 +0000 (0:00:00.440) 0:01:18.927 **** 2025-09-06 00:48:46.059620 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.059631 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.059641 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.059652 | orchestrator | 2025-09-06 00:48:46.059663 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-06 00:48:46.059674 | orchestrator | Saturday 06 September 2025 00:47:26 +0000 (0:00:00.300) 0:01:19.228 **** 2025-09-06 00:48:46.059685 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.059695 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.059706 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.059717 | orchestrator | 2025-09-06 00:48:46.059728 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-06 00:48:46.059739 | orchestrator | Saturday 06 September 2025 00:47:27 +0000 (0:00:00.255) 0:01:19.483 **** 2025-09-06 00:48:46.059749 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.059760 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.059771 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.059782 | orchestrator | 2025-09-06 00:48:46.059792 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-06 00:48:46.059803 | orchestrator | Saturday 06 September 2025 00:47:27 +0000 (0:00:00.261) 0:01:19.745 **** 2025-09-06 00:48:46.059814 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.059825 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.059836 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.059846 | orchestrator | 2025-09-06 00:48:46.059857 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-06 00:48:46.059868 | orchestrator | Saturday 06 September 2025 00:47:27 +0000 (0:00:00.368) 0:01:20.113 **** 2025-09-06 00:48:46.059879 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.059889 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.059900 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.059911 | orchestrator | 2025-09-06 00:48:46.059922 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-06 00:48:46.059937 | orchestrator | Saturday 06 September 2025 00:47:27 +0000 (0:00:00.259) 0:01:20.373 **** 2025-09-06 00:48:46.059948 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.059959 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.059970 | orchestrator | skipping: [testbed-node-2] 2025-09-06 
00:48:46.059980 | orchestrator | 2025-09-06 00:48:46.059991 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-06 00:48:46.060002 | orchestrator | Saturday 06 September 2025 00:47:28 +0000 (0:00:00.257) 0:01:20.630 **** 2025-09-06 00:48:46.060013 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060024 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060034 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060045 | orchestrator | 2025-09-06 00:48:46.060056 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-06 00:48:46.060067 | orchestrator | Saturday 06 September 2025 00:47:28 +0000 (0:00:00.335) 0:01:20.965 **** 2025-09-06 00:48:46.060078 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060088 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060099 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060110 | orchestrator | 2025-09-06 00:48:46.060121 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-06 00:48:46.060132 | orchestrator | Saturday 06 September 2025 00:47:28 +0000 (0:00:00.268) 0:01:21.234 **** 2025-09-06 00:48:46.060143 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060153 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060216 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060229 | orchestrator | 2025-09-06 00:48:46.060240 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-06 00:48:46.060262 | orchestrator | Saturday 06 September 2025 00:47:29 +0000 (0:00:00.409) 0:01:21.643 **** 2025-09-06 00:48:46.060273 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060284 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060295 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060305 | orchestrator | 2025-09-06 00:48:46.060316 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-06 00:48:46.060327 | orchestrator | Saturday 06 September 2025 00:47:29 +0000 (0:00:00.254) 0:01:21.897 **** 2025-09-06 00:48:46.060338 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060349 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060360 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060370 | orchestrator | 2025-09-06 00:48:46.060381 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-06 00:48:46.060392 | orchestrator | Saturday 06 September 2025 00:47:29 +0000 (0:00:00.301) 0:01:22.199 **** 2025-09-06 00:48:46.060403 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060414 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060432 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060443 | orchestrator | 2025-09-06 00:48:46.060454 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-06 00:48:46.060465 | orchestrator | Saturday 06 September 2025 00:47:29 +0000 (0:00:00.264) 0:01:22.464 **** 2025-09-06 00:48:46.060476 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:48:46.060487 | orchestrator | 2025-09-06 00:48:46.060498 | orchestrator | TASK [ovn-db : Set bootstrap args fact for 
NB (new cluster)] ******************* 2025-09-06 00:48:46.060509 | orchestrator | Saturday 06 September 2025 00:47:30 +0000 (0:00:00.684) 0:01:23.148 **** 2025-09-06 00:48:46.060520 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.060531 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.060542 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.060552 | orchestrator | 2025-09-06 00:48:46.060562 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-06 00:48:46.060571 | orchestrator | Saturday 06 September 2025 00:47:31 +0000 (0:00:00.381) 0:01:23.529 **** 2025-09-06 00:48:46.060581 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.060591 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.060600 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.060610 | orchestrator | 2025-09-06 00:48:46.060620 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-06 00:48:46.060630 | orchestrator | Saturday 06 September 2025 00:47:31 +0000 (0:00:00.374) 0:01:23.903 **** 2025-09-06 00:48:46.060639 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060649 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060659 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060668 | orchestrator | 2025-09-06 00:48:46.060678 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-06 00:48:46.060688 | orchestrator | Saturday 06 September 2025 00:47:31 +0000 (0:00:00.416) 0:01:24.320 **** 2025-09-06 00:48:46.060698 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060707 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060717 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060726 | orchestrator | 2025-09-06 00:48:46.060736 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-06 00:48:46.060746 | orchestrator | Saturday 06 September 2025 00:47:32 +0000 (0:00:00.299) 0:01:24.620 **** 2025-09-06 00:48:46.060756 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060766 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060776 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060785 | orchestrator | 2025-09-06 00:48:46.060795 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-06 00:48:46.060805 | orchestrator | Saturday 06 September 2025 00:47:32 +0000 (0:00:00.331) 0:01:24.951 **** 2025-09-06 00:48:46.060820 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060830 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060839 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060849 | orchestrator | 2025-09-06 00:48:46.060859 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-06 00:48:46.060869 | orchestrator | Saturday 06 September 2025 00:47:32 +0000 (0:00:00.346) 0:01:25.297 **** 2025-09-06 00:48:46.060878 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060888 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060898 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060907 | orchestrator | 2025-09-06 00:48:46.060917 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-06 00:48:46.060931 | orchestrator | 
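At this point the ovn-db role's lookup_cluster.yml found no existing OVN DB container volumes on testbed-node-0/1/2, so the "Check if running", port-liveness, and leader/follower probes were all skipped and bootstrap-initial.yml was included: the "new cluster" bootstrap args are set and the "new member" variants skipped, meaning a fresh three-node raft cluster is about to be created for both the OVN_Northbound and OVN_Southbound databases. Once the ovn-nb-db and ovn-sb-db containers are up (the restart handlers further below), cluster health can be inspected with ovn-appctl; a minimal check, assuming the default control-socket path inside the ovn_nb_db container and a Docker runtime (both assumptions, not shown in this log), might look like:

- name: Show OVN_Northbound raft cluster status (illustrative check only)
  ansible.builtin.command: >
    docker exec ovn_nb_db
    ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
  register: nb_cluster_status
  changed_when: false      # read-only query, never reports a change

- name: Print the cluster status
  ansible.builtin.debug:
    var: nb_cluster_status.stdout_lines

The "Get OVN_Northbound cluster leader" and "Get OVN_Southbound cluster leader" tasks further down query this same state and feed the conditional "Configure OVN NB/SB connection settings" steps, which accordingly run on a single host (changed on testbed-node-0, skipped on the other two in the results below).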
Saturday 06 September 2025 00:47:33 +0000 (0:00:00.630) 0:01:25.928 **** 2025-09-06 00:48:46.060941 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.060951 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.060960 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.060970 | orchestrator | 2025-09-06 00:48:46.060980 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-06 00:48:46.060990 | orchestrator | Saturday 06 September 2025 00:47:33 +0000 (0:00:00.447) 0:01:26.376 **** 2025-09-06 00:48:46.061000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla2025-09-06 00:48:46 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:46.061053 | orchestrator | 2025-09-06 00:48:46 | INFO  | Task 264acd0f-995a-42ce-879d-4f30e0c1f31b is in state SUCCESS 2025-09-06 00:48:46.061063 | orchestrator | _logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061136 | orchestrator | 2025-09-06 00:48:46.061146 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-06 00:48:46.061156 | orchestrator | Saturday 06 September 2025 00:47:35 +0000 (0:00:01.625) 0:01:28.001 **** 2025-09-06 00:48:46.061181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061240 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061286 | orchestrator | 2025-09-06 00:48:46.061296 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-06 00:48:46.061305 | orchestrator | Saturday 06 September 2025 00:47:39 +0000 (0:00:03.706) 0:01:31.708 **** 2025-09-06 00:48:46.061319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.061422 | orchestrator | 2025-09-06 00:48:46.061431 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-06 00:48:46.061441 | orchestrator | Saturday 06 September 2025 00:47:41 +0000 (0:00:02.033) 0:01:33.741 **** 2025-09-06 00:48:46.061451 | orchestrator | 2025-09-06 00:48:46.061461 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-06 00:48:46.061470 | orchestrator | Saturday 06 September 2025 00:47:41 +0000 (0:00:00.190) 0:01:33.932 **** 2025-09-06 00:48:46.061480 | orchestrator | 2025-09-06 00:48:46.061490 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-06 00:48:46.061499 | orchestrator | Saturday 06 September 2025 00:47:41 +0000 (0:00:00.060) 0:01:33.992 **** 2025-09-06 00:48:46.061509 | orchestrator | 2025-09-06 00:48:46.061519 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-06 00:48:46.061528 | orchestrator | Saturday 06 September 2025 00:47:41 +0000 (0:00:00.059) 0:01:34.052 **** 2025-09-06 00:48:46.061538 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:48:46.061548 | orchestrator | changed: [testbed-node-0] 2025-09-06 
00:48:46.061557 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:48:46.061567 | orchestrator | 2025-09-06 00:48:46.061580 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-06 00:48:46.061590 | orchestrator | Saturday 06 September 2025 00:47:48 +0000 (0:00:07.396) 0:01:41.448 **** 2025-09-06 00:48:46.061600 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:48:46.061610 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:48:46.061619 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:48:46.061629 | orchestrator | 2025-09-06 00:48:46.061638 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-06 00:48:46.061648 | orchestrator | Saturday 06 September 2025 00:47:56 +0000 (0:00:07.729) 0:01:49.178 **** 2025-09-06 00:48:46.061658 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:48:46.061667 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:48:46.061677 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:48:46.061687 | orchestrator | 2025-09-06 00:48:46.061696 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-06 00:48:46.061706 | orchestrator | Saturday 06 September 2025 00:48:04 +0000 (0:00:07.562) 0:01:56.741 **** 2025-09-06 00:48:46.061715 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.061725 | orchestrator | 2025-09-06 00:48:46.061734 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-06 00:48:46.061744 | orchestrator | Saturday 06 September 2025 00:48:04 +0000 (0:00:00.132) 0:01:56.873 **** 2025-09-06 00:48:46.061754 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.061763 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.061773 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.061787 | orchestrator | 2025-09-06 00:48:46.061797 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-06 00:48:46.061807 | orchestrator | Saturday 06 September 2025 00:48:05 +0000 (0:00:01.053) 0:01:57.926 **** 2025-09-06 00:48:46.061816 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.061826 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.061835 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:48:46.061845 | orchestrator | 2025-09-06 00:48:46.061855 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-06 00:48:46.061864 | orchestrator | Saturday 06 September 2025 00:48:06 +0000 (0:00:00.677) 0:01:58.604 **** 2025-09-06 00:48:46.061874 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.061884 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.061893 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.061903 | orchestrator | 2025-09-06 00:48:46.061912 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-06 00:48:46.061922 | orchestrator | Saturday 06 September 2025 00:48:06 +0000 (0:00:00.786) 0:01:59.390 **** 2025-09-06 00:48:46.061932 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.061941 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.061956 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:48:46.061966 | orchestrator | 2025-09-06 00:48:46.061976 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-06 
00:48:46.061986 | orchestrator | Saturday 06 September 2025 00:48:07 +0000 (0:00:00.669) 0:02:00.059 **** 2025-09-06 00:48:46.061996 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.062005 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.062040 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.062052 | orchestrator | 2025-09-06 00:48:46.062062 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-06 00:48:46.062072 | orchestrator | Saturday 06 September 2025 00:48:08 +0000 (0:00:01.057) 0:02:01.117 **** 2025-09-06 00:48:46.062081 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.062091 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.062100 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.062110 | orchestrator | 2025-09-06 00:48:46.062119 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-06 00:48:46.062128 | orchestrator | Saturday 06 September 2025 00:48:09 +0000 (0:00:00.720) 0:02:01.837 **** 2025-09-06 00:48:46.062138 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.062147 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.062157 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.062182 | orchestrator | 2025-09-06 00:48:46.062192 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-06 00:48:46.062202 | orchestrator | Saturday 06 September 2025 00:48:09 +0000 (0:00:00.390) 0:02:02.228 **** 2025-09-06 00:48:46.062212 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062222 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062232 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062251 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062262 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062273 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062283 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062293 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062310 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062320 | orchestrator | 2025-09-06 00:48:46.062330 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-06 00:48:46.062340 | orchestrator | Saturday 06 September 2025 00:48:11 +0000 (0:00:01.415) 0:02:03.644 **** 2025-09-06 00:48:46.062349 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062359 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062369 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062379 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062419 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062449 | orchestrator | 2025-09-06 00:48:46.062459 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-06 00:48:46.062474 | orchestrator | Saturday 06 September 2025 00:48:16 +0000 (0:00:05.133) 0:02:08.778 **** 2025-09-06 00:48:46.062484 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062494 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062504 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062544 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062582 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 00:48:46.062611 | orchestrator | 2025-09-06 00:48:46.062622 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-06 00:48:46.062631 | orchestrator | Saturday 06 September 2025 00:48:18 +0000 (0:00:02.550) 0:02:11.328 **** 2025-09-06 00:48:46.062641 | orchestrator | 2025-09-06 00:48:46.062651 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-06 00:48:46.062660 | orchestrator | Saturday 06 September 2025 00:48:18 +0000 (0:00:00.071) 0:02:11.399 **** 2025-09-06 00:48:46.062670 | orchestrator | 2025-09-06 
00:48:46.062679 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-06 00:48:46.062689 | orchestrator | Saturday 06 September 2025 00:48:19 +0000 (0:00:00.074) 0:02:11.474 **** 2025-09-06 00:48:46.062699 | orchestrator | 2025-09-06 00:48:46.062713 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-06 00:48:46.062723 | orchestrator | Saturday 06 September 2025 00:48:19 +0000 (0:00:00.091) 0:02:11.565 **** 2025-09-06 00:48:46.062733 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:48:46.062742 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:48:46.062752 | orchestrator | 2025-09-06 00:48:46.062762 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-06 00:48:46.062771 | orchestrator | Saturday 06 September 2025 00:48:25 +0000 (0:00:06.070) 0:02:17.635 **** 2025-09-06 00:48:46.062781 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:48:46.062790 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:48:46.062800 | orchestrator | 2025-09-06 00:48:46.062809 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-06 00:48:46.062819 | orchestrator | Saturday 06 September 2025 00:48:31 +0000 (0:00:06.423) 0:02:24.059 **** 2025-09-06 00:48:46.062834 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:48:46.062844 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:48:46.062853 | orchestrator | 2025-09-06 00:48:46.062863 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-06 00:48:46.062872 | orchestrator | Saturday 06 September 2025 00:48:38 +0000 (0:00:06.623) 0:02:30.682 **** 2025-09-06 00:48:46.062882 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:48:46.062891 | orchestrator | 2025-09-06 00:48:46.062901 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-06 00:48:46.062911 | orchestrator | Saturday 06 September 2025 00:48:38 +0000 (0:00:00.131) 0:02:30.813 **** 2025-09-06 00:48:46.062920 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.062930 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.062939 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.062949 | orchestrator | 2025-09-06 00:48:46.062958 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-06 00:48:46.062968 | orchestrator | Saturday 06 September 2025 00:48:39 +0000 (0:00:00.862) 0:02:31.676 **** 2025-09-06 00:48:46.062978 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.062987 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.062997 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:48:46.063006 | orchestrator | 2025-09-06 00:48:46.063016 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-06 00:48:46.063025 | orchestrator | Saturday 06 September 2025 00:48:39 +0000 (0:00:00.610) 0:02:32.287 **** 2025-09-06 00:48:46.063035 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.063044 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.063054 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.063063 | orchestrator | 2025-09-06 00:48:46.063073 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-06 00:48:46.063082 | orchestrator | Saturday 06 
September 2025 00:48:40 +0000 (0:00:00.751) 0:02:33.038 **** 2025-09-06 00:48:46.063092 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:48:46.063101 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:48:46.063111 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:48:46.063121 | orchestrator | 2025-09-06 00:48:46.063130 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-06 00:48:46.063140 | orchestrator | Saturday 06 September 2025 00:48:41 +0000 (0:00:00.845) 0:02:33.883 **** 2025-09-06 00:48:46.063150 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.063159 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.063212 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.063222 | orchestrator | 2025-09-06 00:48:46.063232 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-06 00:48:46.063246 | orchestrator | Saturday 06 September 2025 00:48:42 +0000 (0:00:00.808) 0:02:34.692 **** 2025-09-06 00:48:46.063255 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:48:46.063265 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:48:46.063275 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:48:46.063284 | orchestrator | 2025-09-06 00:48:46.063294 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:48:46.063303 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-06 00:48:46.063314 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-06 00:48:46.063324 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-06 00:48:46.063333 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:48:46.063343 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:48:46.063359 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:48:46.063369 | orchestrator | 2025-09-06 00:48:46.063378 | orchestrator | 2025-09-06 00:48:46.063388 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:48:46.063398 | orchestrator | Saturday 06 September 2025 00:48:43 +0000 (0:00:00.840) 0:02:35.533 **** 2025-09-06 00:48:46.063407 | orchestrator | =============================================================================== 2025-09-06 00:48:46.063417 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 40.35s 2025-09-06 00:48:46.063427 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.80s 2025-09-06 00:48:46.063436 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.19s 2025-09-06 00:48:46.063446 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.15s 2025-09-06 00:48:46.063462 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.47s 2025-09-06 00:48:46.063472 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.13s 2025-09-06 00:48:46.063481 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.71s 2025-09-06 00:48:46.063491 
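Note: in the "Get OVN_Northbound/OVN_Southbound cluster leader" and "Configure OVN NB/SB connection settings" tasks above, only testbed-node-0 reports changed, presumably because the connection settings are applied on the current Raft leader and the other members are skipped. A minimal sketch of an equivalent leader check is shown below; the container name ovn_nb_db comes from the service definition in this log, while the ctl-socket path inside the container is an assumption and the actual command the role runs is not shown here.

    import subprocess

    def nb_db_is_leader(container="ovn_nb_db",
                        ctl="/var/run/ovn/ovnnb_db.ctl"):  # assumed socket path
        """Return True if this node's NB ovsdb-server is the Raft leader."""
        out = subprocess.run(
            ["docker", "exec", container, "ovs-appctl", "-t", ctl,
             "cluster/status", "OVN_Northbound"],
            capture_output=True, text=True, check=True,
        ).stdout
        # cluster/status prints a line such as "Role: leader" or "Role: follower".
        return any(line.strip() == "Role: leader" for line in out.splitlines())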
| orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.71s 2025-09-06 00:48:46.063500 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.55s 2025-09-06 00:48:46.063510 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.03s 2025-09-06 00:48:46.063519 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.78s 2025-09-06 00:48:46.063529 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.63s 2025-09-06 00:48:46.063539 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.58s 2025-09-06 00:48:46.063548 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.47s 2025-09-06 00:48:46.063558 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.45s 2025-09-06 00:48:46.063567 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.42s 2025-09-06 00:48:46.063577 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.36s 2025-09-06 00:48:46.063586 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.32s 2025-09-06 00:48:46.063596 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.07s 2025-09-06 00:48:46.063605 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.06s 2025-09-06 00:48:46.063615 | orchestrator | 2025-09-06 00:48:46 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:46.063625 | orchestrator | 2025-09-06 00:48:46 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:49.096198 | orchestrator | 2025-09-06 00:48:49 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:49.096396 | orchestrator | 2025-09-06 00:48:49 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:49.096516 | orchestrator | 2025-09-06 00:48:49 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:52.135586 | orchestrator | 2025-09-06 00:48:52 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:52.139442 | orchestrator | 2025-09-06 00:48:52 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:52.139474 | orchestrator | 2025-09-06 00:48:52 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:55.171348 | orchestrator | 2025-09-06 00:48:55 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:55.171996 | orchestrator | 2025-09-06 00:48:55 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:55.172313 | orchestrator | 2025-09-06 00:48:55 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:48:58.211841 | orchestrator | 2025-09-06 00:48:58 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:48:58.215570 | orchestrator | 2025-09-06 00:48:58 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:48:58.215600 | orchestrator | 2025-09-06 00:48:58 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:01.255312 | orchestrator | 2025-09-06 00:49:01 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:01.256254 | orchestrator | 
2025-09-06 00:49:01 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:01.256288 | orchestrator | 2025-09-06 00:49:01 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:04.299649 | orchestrator | 2025-09-06 00:49:04 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:04.300269 | orchestrator | 2025-09-06 00:49:04 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:04.300668 | orchestrator | 2025-09-06 00:49:04 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:07.344211 | orchestrator | 2025-09-06 00:49:07 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:07.344537 | orchestrator | 2025-09-06 00:49:07 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:07.344565 | orchestrator | 2025-09-06 00:49:07 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:10.395101 | orchestrator | 2025-09-06 00:49:10 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:10.397001 | orchestrator | 2025-09-06 00:49:10 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:10.397171 | orchestrator | 2025-09-06 00:49:10 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:13.440928 | orchestrator | 2025-09-06 00:49:13 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:13.442767 | orchestrator | 2025-09-06 00:49:13 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:13.442794 | orchestrator | 2025-09-06 00:49:13 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:16.487915 | orchestrator | 2025-09-06 00:49:16 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:16.488938 | orchestrator | 2025-09-06 00:49:16 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:16.488970 | orchestrator | 2025-09-06 00:49:16 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:19.540440 | orchestrator | 2025-09-06 00:49:19 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:19.540539 | orchestrator | 2025-09-06 00:49:19 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:19.540552 | orchestrator | 2025-09-06 00:49:19 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:22.587350 | orchestrator | 2025-09-06 00:49:22 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:22.589092 | orchestrator | 2025-09-06 00:49:22 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:22.589127 | orchestrator | 2025-09-06 00:49:22 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:25.634810 | orchestrator | 2025-09-06 00:49:25 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:25.636745 | orchestrator | 2025-09-06 00:49:25 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:25.637039 | orchestrator | 2025-09-06 00:49:25 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:28.679498 | orchestrator | 2025-09-06 00:49:28 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:28.679805 | orchestrator | 2025-09-06 00:49:28 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 
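The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines are the OSISM client polling its task queue roughly once per second until the queued kolla-ansible runs finish. A rough sketch of such a wait loop is shown below; get_task_state() is a hypothetical stand-in for however the client actually queries task status.

    import time

    def wait_for_task(task_id, get_task_state, interval=1, timeout=3600):
        """Poll a single task until it leaves STARTED or the timeout expires."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            state = get_task_state(task_id)  # hypothetical status lookup
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                return state                 # e.g. SUCCESS or FAILURE
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
        raise TimeoutError(f"Task {task_id} still running after {timeout}s")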
2025-09-06 00:49:28.679842 | orchestrator | 2025-09-06 00:49:28 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:31.728052 | orchestrator | 2025-09-06 00:49:31 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:31.728224 | orchestrator | 2025-09-06 00:49:31 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:31.728241 | orchestrator | 2025-09-06 00:49:31 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:34.778298 | orchestrator | 2025-09-06 00:49:34 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:34.778428 | orchestrator | 2025-09-06 00:49:34 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:34.778456 | orchestrator | 2025-09-06 00:49:34 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:37.827193 | orchestrator | 2025-09-06 00:49:37 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:37.829108 | orchestrator | 2025-09-06 00:49:37 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:37.829162 | orchestrator | 2025-09-06 00:49:37 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:40.872119 | orchestrator | 2025-09-06 00:49:40 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:40.873700 | orchestrator | 2025-09-06 00:49:40 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:40.873743 | orchestrator | 2025-09-06 00:49:40 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:43.914143 | orchestrator | 2025-09-06 00:49:43 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:43.915044 | orchestrator | 2025-09-06 00:49:43 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:43.915110 | orchestrator | 2025-09-06 00:49:43 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:46.961616 | orchestrator | 2025-09-06 00:49:46 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:46.963149 | orchestrator | 2025-09-06 00:49:46 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:46.963184 | orchestrator | 2025-09-06 00:49:46 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:50.014660 | orchestrator | 2025-09-06 00:49:50 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:50.017206 | orchestrator | 2025-09-06 00:49:50 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:50.017242 | orchestrator | 2025-09-06 00:49:50 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:53.069231 | orchestrator | 2025-09-06 00:49:53 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:53.069343 | orchestrator | 2025-09-06 00:49:53 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:53.069405 | orchestrator | 2025-09-06 00:49:53 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:49:56.115270 | orchestrator | 2025-09-06 00:49:56 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:56.115759 | orchestrator | 2025-09-06 00:49:56 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:56.115950 | orchestrator | 2025-09-06 00:49:56 | INFO  | Wait 1 second(s) until 
the next check 2025-09-06 00:49:59.158149 | orchestrator | 2025-09-06 00:49:59 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:49:59.159859 | orchestrator | 2025-09-06 00:49:59 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:49:59.159907 | orchestrator | 2025-09-06 00:49:59 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:02.205460 | orchestrator | 2025-09-06 00:50:02 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:02.206513 | orchestrator | 2025-09-06 00:50:02 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:02.206549 | orchestrator | 2025-09-06 00:50:02 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:05.256906 | orchestrator | 2025-09-06 00:50:05 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:05.259922 | orchestrator | 2025-09-06 00:50:05 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:05.259996 | orchestrator | 2025-09-06 00:50:05 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:08.314348 | orchestrator | 2025-09-06 00:50:08 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:08.316108 | orchestrator | 2025-09-06 00:50:08 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:08.316147 | orchestrator | 2025-09-06 00:50:08 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:11.352967 | orchestrator | 2025-09-06 00:50:11 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:11.356436 | orchestrator | 2025-09-06 00:50:11 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:11.356467 | orchestrator | 2025-09-06 00:50:11 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:14.398287 | orchestrator | 2025-09-06 00:50:14 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:14.398374 | orchestrator | 2025-09-06 00:50:14 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:14.398389 | orchestrator | 2025-09-06 00:50:14 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:17.437190 | orchestrator | 2025-09-06 00:50:17 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:17.439065 | orchestrator | 2025-09-06 00:50:17 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:17.439920 | orchestrator | 2025-09-06 00:50:17 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:20.479959 | orchestrator | 2025-09-06 00:50:20 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:20.480786 | orchestrator | 2025-09-06 00:50:20 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:20.481191 | orchestrator | 2025-09-06 00:50:20 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:23.528842 | orchestrator | 2025-09-06 00:50:23 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:23.529857 | orchestrator | 2025-09-06 00:50:23 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:23.530427 | orchestrator | 2025-09-06 00:50:23 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:26.568170 | orchestrator | 2025-09-06 00:50:26 | INFO  | Task 
69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:26.569430 | orchestrator | 2025-09-06 00:50:26 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:26.569470 | orchestrator | 2025-09-06 00:50:26 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:29.603852 | orchestrator | 2025-09-06 00:50:29 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:29.605173 | orchestrator | 2025-09-06 00:50:29 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:29.605207 | orchestrator | 2025-09-06 00:50:29 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:32.638841 | orchestrator | 2025-09-06 00:50:32 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:32.640202 | orchestrator | 2025-09-06 00:50:32 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:32.640292 | orchestrator | 2025-09-06 00:50:32 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:35.671760 | orchestrator | 2025-09-06 00:50:35 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:35.671884 | orchestrator | 2025-09-06 00:50:35 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:35.671902 | orchestrator | 2025-09-06 00:50:35 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:38.715290 | orchestrator | 2025-09-06 00:50:38 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:38.715860 | orchestrator | 2025-09-06 00:50:38 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:38.715899 | orchestrator | 2025-09-06 00:50:38 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:41.770340 | orchestrator | 2025-09-06 00:50:41 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:41.770589 | orchestrator | 2025-09-06 00:50:41 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:41.770625 | orchestrator | 2025-09-06 00:50:41 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:44.802401 | orchestrator | 2025-09-06 00:50:44 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:44.803754 | orchestrator | 2025-09-06 00:50:44 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:44.803784 | orchestrator | 2025-09-06 00:50:44 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:47.846611 | orchestrator | 2025-09-06 00:50:47 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:47.846716 | orchestrator | 2025-09-06 00:50:47 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:47.846732 | orchestrator | 2025-09-06 00:50:47 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:50.885722 | orchestrator | 2025-09-06 00:50:50 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:50.887530 | orchestrator | 2025-09-06 00:50:50 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:50.887800 | orchestrator | 2025-09-06 00:50:50 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:53.932674 | orchestrator | 2025-09-06 00:50:53 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:53.933801 | orchestrator 
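Two task IDs (69784897-… and 1e7ce5c4-…) are tracked in the same loop here, and additional IDs appear once further plays are queued. A set-based variant of the wait loop, again using the hypothetical get_task_state() helper, would look roughly like this:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll a set of tasks, dropping each one as soon as it finishes."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)      # hypothetical status lookup
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)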
| 2025-09-06 00:50:53 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:53.933833 | orchestrator | 2025-09-06 00:50:53 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:50:56.975248 | orchestrator | 2025-09-06 00:50:56 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:50:56.976650 | orchestrator | 2025-09-06 00:50:56 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:50:56.976679 | orchestrator | 2025-09-06 00:50:56 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:51:00.021533 | orchestrator | 2025-09-06 00:51:00 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:51:00.022007 | orchestrator | 2025-09-06 00:51:00 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:51:00.022093 | orchestrator | 2025-09-06 00:51:00 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:51:03.067934 | orchestrator | 2025-09-06 00:51:03 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:51:03.069563 | orchestrator | 2025-09-06 00:51:03 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:51:03.069590 | orchestrator | 2025-09-06 00:51:03 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:51:06.124691 | orchestrator | 2025-09-06 00:51:06 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:51:06.126314 | orchestrator | 2025-09-06 00:51:06 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:51:06.126343 | orchestrator | 2025-09-06 00:51:06 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:51:09.172634 | orchestrator | 2025-09-06 00:51:09 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:51:09.172725 | orchestrator | 2025-09-06 00:51:09 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state STARTED 2025-09-06 00:51:09.173352 | orchestrator | 2025-09-06 00:51:09 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:51:12.207090 | orchestrator | 2025-09-06 00:51:12 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:51:12.212286 | orchestrator | 2025-09-06 00:51:12 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:51:12.213591 | orchestrator | 2025-09-06 00:51:12 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:51:12.218826 | orchestrator | 2025-09-06 00:51:12 | INFO  | Task 1e7ce5c4-5720-4c90-bdd2-c980a6d6360f is in state SUCCESS 2025-09-06 00:51:12.219339 | orchestrator | 2025-09-06 00:51:12.221849 | orchestrator | 2025-09-06 00:51:12.221881 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:51:12.221894 | orchestrator | 2025-09-06 00:51:12.221906 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:51:12.221917 | orchestrator | Saturday 06 September 2025 00:44:55 +0000 (0:00:00.436) 0:00:00.436 **** 2025-09-06 00:51:12.221955 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.221968 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.221979 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.221990 | orchestrator | 2025-09-06 00:51:12.222002 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 
00:51:12.222013 | orchestrator | Saturday 06 September 2025 00:44:55 +0000 (0:00:00.487) 0:00:00.923 **** 2025-09-06 00:51:12.222079 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-06 00:51:12.222118 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-06 00:51:12.222130 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-06 00:51:12.222141 | orchestrator | 2025-09-06 00:51:12.222152 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-06 00:51:12.222162 | orchestrator | 2025-09-06 00:51:12.222173 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-06 00:51:12.222184 | orchestrator | Saturday 06 September 2025 00:44:56 +0000 (0:00:00.502) 0:00:01.425 **** 2025-09-06 00:51:12.222210 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.222222 | orchestrator | 2025-09-06 00:51:12.222233 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-06 00:51:12.222243 | orchestrator | Saturday 06 September 2025 00:44:57 +0000 (0:00:00.843) 0:00:02.269 **** 2025-09-06 00:51:12.222254 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.222265 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.222276 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.222288 | orchestrator | 2025-09-06 00:51:12.222299 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-06 00:51:12.222309 | orchestrator | Saturday 06 September 2025 00:44:57 +0000 (0:00:00.722) 0:00:02.991 **** 2025-09-06 00:51:12.222320 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.222331 | orchestrator | 2025-09-06 00:51:12.222342 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-06 00:51:12.222352 | orchestrator | Saturday 06 September 2025 00:44:59 +0000 (0:00:01.371) 0:00:04.363 **** 2025-09-06 00:51:12.222363 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.222374 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.222385 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.222395 | orchestrator | 2025-09-06 00:51:12.222406 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-06 00:51:12.222417 | orchestrator | Saturday 06 September 2025 00:45:00 +0000 (0:00:01.253) 0:00:05.616 **** 2025-09-06 00:51:12.222427 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-06 00:51:12.222438 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-06 00:51:12.222449 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-06 00:51:12.222463 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-06 00:51:12.222477 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-06 00:51:12.222490 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-06 00:51:12.222615 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-06 00:51:12.222630 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-06 00:51:12.222711 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-06 00:51:12.222724 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-06 00:51:12.222735 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-06 00:51:12.222746 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-06 00:51:12.222756 | orchestrator | 2025-09-06 00:51:12.222767 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-06 00:51:12.222778 | orchestrator | Saturday 06 September 2025 00:45:04 +0000 (0:00:04.393) 0:00:10.010 **** 2025-09-06 00:51:12.222789 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-06 00:51:12.222800 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-06 00:51:12.222824 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-06 00:51:12.222835 | orchestrator | 2025-09-06 00:51:12.222847 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-06 00:51:12.222858 | orchestrator | Saturday 06 September 2025 00:45:06 +0000 (0:00:01.141) 0:00:11.151 **** 2025-09-06 00:51:12.222868 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-06 00:51:12.222879 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-06 00:51:12.222890 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-06 00:51:12.222901 | orchestrator | 2025-09-06 00:51:12.222912 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-06 00:51:12.222941 | orchestrator | Saturday 06 September 2025 00:45:08 +0000 (0:00:01.905) 0:00:13.057 **** 2025-09-06 00:51:12.222953 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-06 00:51:12.222964 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.222988 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-06 00:51:12.223000 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.223010 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-06 00:51:12.223021 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.223032 | orchestrator | 2025-09-06 00:51:12.223042 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-06 00:51:12.223053 | orchestrator | Saturday 06 September 2025 00:45:08 +0000 (0:00:00.514) 0:00:13.571 **** 2025-09-06 00:51:12.223068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.223092 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.223104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.223116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.223136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.223155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.223168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-06 00:51:12.223185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-06 00:51:12.223197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-06 00:51:12.223208 | orchestrator | 2025-09-06 00:51:12.223219 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-06 00:51:12.223230 | orchestrator | Saturday 06 September 2025 00:45:10 +0000 (0:00:02.157) 0:00:15.728 **** 2025-09-06 00:51:12.223241 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.223252 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.223263 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.223273 | orchestrator | 2025-09-06 00:51:12.224568 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-06 00:51:12.224587 | orchestrator | Saturday 06 September 2025 00:45:12 +0000 (0:00:01.464) 0:00:17.193 **** 2025-09-06 00:51:12.224598 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-06 00:51:12.224610 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-06 00:51:12.224620 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-06 00:51:12.224632 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-06 00:51:12.224760 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-06 00:51:12.224776 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-06 00:51:12.224861 | orchestrator | 2025-09-06 00:51:12.224874 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-06 00:51:12.224885 | orchestrator | Saturday 06 September 2025 00:45:14 +0000 (0:00:01.866) 0:00:19.059 **** 2025-09-06 00:51:12.224896 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.224907 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.225035 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.225053 | orchestrator | 2025-09-06 00:51:12.225064 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-06 00:51:12.225075 | orchestrator | Saturday 06 September 2025 00:45:16 +0000 (0:00:02.087) 0:00:21.147 **** 2025-09-06 00:51:12.225845 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.225866 | orchestrator | ok: 
[testbed-node-2] 2025-09-06 00:51:12.225877 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.226436 | orchestrator | 2025-09-06 00:51:12.226559 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-06 00:51:12.226576 | orchestrator | Saturday 06 September 2025 00:45:18 +0000 (0:00:01.920) 0:00:23.068 **** 2025-09-06 00:51:12.226590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.227636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.227661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.227736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-06 00:51:12.227809 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.227821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.227846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.227857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.227869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-06 00:51:12.228347 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.228381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.228397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.228408 | orchestrator | 
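The alternating changed/skipping results above come from iterating one services dict per role: entries such as haproxy-ssh carry 'enabled': False in this deployment and fall through every deploy task. A small sketch of that filtering, using a dict trimmed to the fields relevant here (shaped like the items printed in this log, not the role's full data), is shown below.

    services = {
        "haproxy":     {"container_name": "haproxy",     "enabled": True},
        "proxysql":    {"container_name": "proxysql",    "enabled": True},
        "keepalived":  {"container_name": "keepalived",  "enabled": True},
        "haproxy-ssh": {"container_name": "haproxy_ssh", "enabled": False},
    }

    # Roughly the effect of the enabled-condition these tasks apply per item:
    for name, service in services.items():
        if not service["enabled"]:
            print(f"skipping: {name}")      # e.g. haproxy-ssh in this deployment
            continue
        print(f"deploying: {service['container_name']}")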
skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.228428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-06 00:51:12.228439 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.228448 | orchestrator | 2025-09-06 00:51:12.228458 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-06 00:51:12.228468 | orchestrator | Saturday 06 September 2025 00:45:18 +0000 (0:00:00.726) 0:00:23.794 **** 2025-09-06 00:51:12.228478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.228488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.228507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 
'timeout': '30'}}}) 2025-09-06 00:51:12.228518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.228532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.228552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-06 00:51:12.228562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.228665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.228678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-06 00:51:12.228696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.228711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.228728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c', '__omit_place_holder__17cfb4e470673b6585af391c802b58a84d4cc90c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-06 00:51:12.228739 | orchestrator | 2025-09-06 00:51:12.228787 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-06 00:51:12.228797 | orchestrator | Saturday 06 September 2025 00:45:21 +0000 (0:00:02.770) 0:00:26.565 **** 2025-09-06 00:51:12.228807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.228817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.228827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.228845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.228856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.228876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.228915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
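The dictionaries echoed in the items above are the loadbalancer service definitions that the role renders into each container's config.json. A minimal YAML sketch of one such entry, reconstructed from the logged item, is shown below; the top-level variable name (loadbalancer_services) and the exact layout are assumptions for illustration, not taken from the role's defaults:

    # Hypothetical sketch of a single service entry, reconstructed from the log above.
    # The variable name "loadbalancer_services" is an assumption.
    loadbalancer_services:
      haproxy:
        container_name: haproxy
        group: loadbalancer
        enabled: true
        image: registry.osism.tech/kolla/haproxy:2024.2
        privileged: true
        volumes:
          - "/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "haproxy_socket:/var/lib/kolla/haproxy/"
          - "letsencrypt_certificates:/etc/haproxy/certificates"
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]
          timeout: "30"
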
2025-09-06 00:51:12.228945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-06 00:51:12.228956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-06 00:51:12.228967 | orchestrator | 2025-09-06 00:51:12.228979 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-06 00:51:12.228990 | orchestrator | Saturday 06 September 2025 00:45:24 +0000 (0:00:03.122) 0:00:29.688 **** 2025-09-06 00:51:12.229001 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-06 00:51:12.229024 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-06 00:51:12.229035 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-06 00:51:12.229047 | orchestrator | 2025-09-06 00:51:12.229058 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-06 00:51:12.229070 | orchestrator | Saturday 06 September 2025 00:45:28 +0000 (0:00:03.923) 0:00:33.611 **** 2025-09-06 00:51:12.229103 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-06 00:51:12.229115 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-06 00:51:12.229126 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-06 00:51:12.229137 | orchestrator | 2025-09-06 00:51:12.229174 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-06 00:51:12.229194 | orchestrator | Saturday 06 September 2025 00:45:35 +0000 (0:00:06.546) 0:00:40.158 **** 2025-09-06 00:51:12.229266 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.229278 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.229290 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.229301 | orchestrator | 2025-09-06 00:51:12.229312 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-06 00:51:12.229356 | orchestrator | Saturday 06 September 2025 00:45:35 +0000 (0:00:00.827) 0:00:40.985 **** 2025-09-06 00:51:12.229367 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-06 00:51:12.229377 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-06 00:51:12.229387 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-06 00:51:12.229397 | orchestrator | 2025-09-06 00:51:12.229406 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-06 00:51:12.229420 | orchestrator | Saturday 06 September 2025 00:45:39 +0000 (0:00:03.146) 0:00:44.131 **** 2025-09-06 00:51:12.229430 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-06 00:51:12.229439 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-06 00:51:12.229449 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-06 00:51:12.229459 | orchestrator | 2025-09-06 00:51:12.229468 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-06 00:51:12.229478 | orchestrator | Saturday 06 September 2025 00:45:42 +0000 (0:00:03.448) 0:00:47.580 **** 2025-09-06 00:51:12.229487 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-06 00:51:12.229497 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-06 00:51:12.229507 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-06 00:51:12.229516 | orchestrator | 2025-09-06 00:51:12.229526 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-06 00:51:12.229535 | orchestrator | Saturday 06 September 2025 00:45:44 +0000 (0:00:01.763) 0:00:49.343 **** 2025-09-06 00:51:12.229545 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-06 00:51:12.229555 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-06 00:51:12.229564 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-06 00:51:12.229574 | orchestrator | 2025-09-06 00:51:12.229583 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-06 00:51:12.229593 | orchestrator | Saturday 06 September 2025 00:45:46 +0000 (0:00:01.883) 0:00:51.226 **** 2025-09-06 00:51:12.229602 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.229612 | orchestrator | 2025-09-06 00:51:12.229621 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-06 00:51:12.229631 | orchestrator | Saturday 06 September 2025 00:45:46 +0000 (0:00:00.502) 0:00:51.729 **** 2025-09-06 00:51:12.229641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.229658 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.229675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.229686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.229700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.229710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.229720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-06 00:51:12.229731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-06 00:51:12.229746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-06 00:51:12.229756 | orchestrator | 2025-09-06 00:51:12.229766 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-06 00:51:12.229776 | orchestrator | Saturday 06 September 2025 00:45:50 +0000 (0:00:03.543) 0:00:55.273 **** 2025-09-06 00:51:12.229793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.229883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.229897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
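The skips being recorded for this task follow the usual pattern in this run: individual items are skipped when a service is defined with enabled: false (haproxy-ssh earlier in the log), and the task is skipped outright on every host, presumably because backend TLS is not enabled in this deployment. A rough, illustrative Ansible sketch of such a guarded copy loop is given below; the variable names (loadbalancer_services, kolla_enable_tls_backend, kolla_tls_backend_cert) and the destination path are assumptions for illustration, not the role's actual task:

    # Illustrative sketch only -- not the actual kolla-ansible task.
    - name: Copying over backend internal TLS certificate (sketch)
      ansible.builtin.copy:
        src: "{{ kolla_tls_backend_cert }}"            # assumed variable
        dest: "/etc/kolla/{{ item.key }}/{{ item.key }}-cert.pem"  # assumed path
        mode: "0600"
      with_dict: "{{ loadbalancer_services }}"          # assumed variable
      when:
        - item.value.enabled | bool
        - kolla_enable_tls_backend | bool               # assumed variable
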
2025-09-06 00:51:12.229907 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.229917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.229941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.232981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.232996 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.233005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233107 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.233117 | orchestrator | 2025-09-06 00:51:12.233125 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-06 00:51:12.233134 | orchestrator | Saturday 06 September 2025 00:45:50 +0000 (0:00:00.570) 0:00:55.844 **** 2025-09-06 00:51:12.233143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233173 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.233181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233213 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.233224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233279 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.233287 | orchestrator | 2025-09-06 00:51:12.233295 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-06 00:51:12.233303 | orchestrator | Saturday 06 September 2025 00:45:51 +0000 (0:00:00.866) 0:00:56.710 **** 2025-09-06 00:51:12.233311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233343 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.233351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233381 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.233407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233438 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.233446 | orchestrator | 2025-09-06 00:51:12.233453 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-06 00:51:12.233461 | orchestrator | Saturday 06 September 2025 00:45:52 +0000 (0:00:00.743) 0:00:57.454 **** 2025-09-06 00:51:12.233469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233501 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.233509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233534 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.233547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233582 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.233591 | orchestrator | 2025-09-06 00:51:12.233600 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-06 00:51:12.233609 | orchestrator | Saturday 06 September 2025 00:45:52 +0000 (0:00:00.511) 0:00:57.966 **** 2025-09-06 00:51:12.233618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233663 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.233672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233698 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.233708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233735 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.233745 | orchestrator | 2025-09-06 00:51:12.233754 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-06 00:51:12.233763 | orchestrator | Saturday 06 September 2025 00:45:53 +0000 (0:00:00.926) 0:00:58.892 **** 2025-09-06 00:51:12.233772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233817 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.233827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233856 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.233865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.233879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.233889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.233903 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.233912 | orchestrator | 2025-09-06 00:51:12.233921 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-06 00:51:12.233980 | orchestrator | Saturday 06 September 2025 00:45:55 +0000 (0:00:01.552) 0:01:00.444 **** 2025-09-06 00:51:12.233992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.234001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.234009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.234053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.234063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.234070 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.234084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.234095 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.234102 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.234112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.234119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.234126 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.234133 | orchestrator | 2025-09-06 00:51:12.234139 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-06 00:51:12.234146 | orchestrator | Saturday 06 September 2025 00:45:56 +0000 (0:00:01.590) 0:01:02.035 **** 2025-09-06 00:51:12.234153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.234160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.234167 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.234177 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.234188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.234199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.234206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.234213 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.234220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-06 00:51:12.234227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-06 00:51:12.234234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-06 00:51:12.234245 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.234251 | orchestrator | 2025-09-06 00:51:12.234258 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-06 00:51:12.234265 | orchestrator | Saturday 06 September 2025 00:45:57 +0000 (0:00:00.871) 0:01:02.907 **** 2025-09-06 00:51:12.234271 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-06 00:51:12.234279 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-06 00:51:12.234290 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-06 00:51:12.234297 | orchestrator | 2025-09-06 00:51:12.234303 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-06 00:51:12.234310 | orchestrator | Saturday 06 September 2025 00:45:59 +0000 (0:00:01.876) 0:01:04.783 **** 2025-09-06 00:51:12.234317 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-06 00:51:12.234324 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-06 00:51:12.234330 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-06 00:51:12.234337 | orchestrator | 2025-09-06 00:51:12.234343 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-06 00:51:12.234350 | orchestrator | Saturday 06 September 2025 00:46:01 +0000 (0:00:01.508) 0:01:06.291 **** 2025-09-06 00:51:12.234356 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-06 00:51:12.234363 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-06 00:51:12.234372 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-06 00:51:12.234379 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.234386 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-06 00:51:12.234392 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-06 00:51:12.234399 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.234406 | orchestrator | skipping: 
[testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-06 00:51:12.234412 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.234419 | orchestrator | 2025-09-06 00:51:12.234425 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-06 00:51:12.234432 | orchestrator | Saturday 06 September 2025 00:46:02 +0000 (0:00:00.875) 0:01:07.167 **** 2025-09-06 00:51:12.234439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.234446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.234456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-06 00:51:12.234468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.234475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.234485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-06 00:51:12.234492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-06 00:51:12.234499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-06 00:51:12.234506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-06 00:51:12.234516 | orchestrator | 2025-09-06 00:51:12.234523 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-06 00:51:12.234530 | orchestrator | Saturday 06 September 2025 00:46:04 +0000 (0:00:02.568) 0:01:09.736 **** 2025-09-06 00:51:12.234536 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.234543 | orchestrator | 2025-09-06 00:51:12.234549 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-06 00:51:12.234556 | orchestrator | Saturday 06 September 2025 00:46:05 +0000 (0:00:00.601) 0:01:10.337 **** 2025-09-06 00:51:12.234564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-06 00:51:12.234576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.234586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-06 00:51:12.234611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.234618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-06 00:51:12.234654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.234661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234681 | orchestrator | 2025-09-06 00:51:12.234688 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-06 00:51:12.234695 | orchestrator | Saturday 06 September 2025 00:46:09 +0000 (0:00:04.587) 0:01:14.924 **** 2025-09-06 00:51:12.234702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-06 00:51:12.234714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-06 00:51:12.234724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.234731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.234741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234769 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.234776 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.234786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-06 00:51:12.234796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.234803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.234820 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.234827 | orchestrator | 2025-09-06 00:51:12.234834 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-06 00:51:12.234866 | orchestrator | Saturday 06 September 2025 00:46:10 +0000 (0:00:00.956) 0:01:15.880 **** 2025-09-06 00:51:12.234875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-06 00:51:12.234942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-06 00:51:12.234952 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.234958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-06 00:51:12.234995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-06 00:51:12.235003 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-06 00:51:12.235010 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.235016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-06 00:51:12.235023 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.235030 | orchestrator | 2025-09-06 00:51:12.235042 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-06 00:51:12.235049 | orchestrator | Saturday 06 September 2025 00:46:11 +0000 (0:00:00.768) 0:01:16.649 **** 2025-09-06 00:51:12.235056 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.235062 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.235069 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.235075 | orchestrator | 2025-09-06 00:51:12.235082 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-06 00:51:12.235088 | orchestrator | Saturday 06 September 2025 00:46:12 +0000 (0:00:01.344) 0:01:17.993 **** 2025-09-06 00:51:12.235095 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.235102 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.235108 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.235114 | orchestrator | 2025-09-06 00:51:12.235121 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-06 00:51:12.235128 | orchestrator | Saturday 06 September 2025 00:46:14 +0000 (0:00:01.896) 0:01:19.890 **** 2025-09-06 00:51:12.235139 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.235146 | orchestrator | 2025-09-06 00:51:12.235152 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-06 00:51:12.235159 | orchestrator | Saturday 06 September 2025 00:46:15 +0000 (0:00:00.699) 0:01:20.589 **** 2025-09-06 00:51:12.235169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.235178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.235192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.235233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235247 | orchestrator | 2025-09-06 00:51:12.235254 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-06 00:51:12.235261 | orchestrator | Saturday 06 September 2025 00:46:19 +0000 (0:00:03.556) 0:01:24.145 **** 2025-09-06 00:51:12.235272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.235287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235301 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.235308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.235315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235329 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.235341 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.235354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.235368 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.235375 | orchestrator | 2025-09-06 00:51:12.235381 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-06 00:51:12.235388 | orchestrator | Saturday 06 September 2025 00:46:19 +0000 (0:00:00.725) 0:01:24.871 **** 2025-09-06 00:51:12.235410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-06 00:51:12.235418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-06 00:51:12.235425 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.235432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-06 00:51:12.235439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-06 00:51:12.235445 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.235452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-06 00:51:12.235459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-06 00:51:12.235465 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.235476 | orchestrator | 2025-09-06 00:51:12.235483 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-06 00:51:12.235489 | orchestrator | Saturday 06 September 2025 00:46:22 +0000 (0:00:02.375) 0:01:27.247 **** 2025-09-06 00:51:12.235496 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.235503 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.235509 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.235516 | orchestrator | 2025-09-06 00:51:12.235522 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-06 00:51:12.235529 | orchestrator | Saturday 06 September 2025 00:46:23 +0000 (0:00:01.453) 0:01:28.700 **** 2025-09-06 00:51:12.235535 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.235542 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.235548 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.235555 | orchestrator | 2025-09-06 00:51:12.235566 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-06 00:51:12.235573 | orchestrator | Saturday 06 September 2025 00:46:25 +0000 (0:00:01.840) 0:01:30.540 **** 2025-09-06 00:51:12.235580 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.235586 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.235593 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.235599 | orchestrator | 2025-09-06 00:51:12.235606 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-06 00:51:12.235612 | orchestrator | Saturday 06 September 2025 00:46:25 +0000 (0:00:00.277) 0:01:30.818 **** 2025-09-06 00:51:12.235619 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.235625 | orchestrator | 2025-09-06 00:51:12.235632 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-06 00:51:12.235639 | orchestrator | Saturday 06 September 2025 00:46:26 +0000 (0:00:00.747) 0:01:31.566 **** 2025-09-06 00:51:12.235649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': 
['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-06 00:51:12.235656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-06 00:51:12.235664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-06 00:51:12.235677 | orchestrator | 2025-09-06 00:51:12.235684 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-06 00:51:12.235691 | orchestrator | Saturday 06 September 2025 00:46:28 +0000 (0:00:02.328) 0:01:33.894 **** 2025-09-06 00:51:12.235702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-06 00:51:12.235709 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.235716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-06 00:51:12.235723 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.235730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-06 00:51:12.235737 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.235743 | orchestrator | 2025-09-06 00:51:12.235750 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-06 00:51:12.235757 | orchestrator | Saturday 06 September 2025 00:46:30 +0000 (0:00:01.467) 0:01:35.361 **** 2025-09-06 00:51:12.235764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-06 00:51:12.235776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-06 00:51:12.235784 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.235791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-06 00:51:12.235798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 
5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-06 00:51:12.235816 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.235827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-06 00:51:12.235835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-06 00:51:12.235842 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.235848 | orchestrator | 2025-09-06 00:51:12.235855 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-06 00:51:12.235861 | orchestrator | Saturday 06 September 2025 00:46:32 +0000 (0:00:02.000) 0:01:37.361 **** 2025-09-06 00:51:12.235868 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.235875 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.235881 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.235888 | orchestrator | 2025-09-06 00:51:12.235894 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-06 00:51:12.235904 | orchestrator | Saturday 06 September 2025 00:46:33 +0000 (0:00:00.750) 0:01:38.112 **** 2025-09-06 00:51:12.235911 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.235917 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.235937 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.235944 | orchestrator | 2025-09-06 00:51:12.235950 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-06 00:51:12.235957 | orchestrator | Saturday 06 September 2025 00:46:34 +0000 (0:00:01.254) 0:01:39.367 **** 2025-09-06 00:51:12.235964 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.235970 | orchestrator | 2025-09-06 00:51:12.235976 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-06 00:51:12.235983 | orchestrator | Saturday 06 September 2025 00:46:35 +0000 (0:00:00.709) 0:01:40.077 **** 2025-09-06 00:51:12.235990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.236002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.236021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.236078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236107 | orchestrator | 2025-09-06 00:51:12.236113 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-06 00:51:12.236120 | orchestrator | Saturday 06 September 2025 00:46:39 +0000 (0:00:04.136) 0:01:44.214 **** 2025-09-06 00:51:12.236127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.236134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236167 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.236174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.236181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236206 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.236215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.236227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236247 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.236254 | orchestrator | 2025-09-06 00:51:12.236261 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-06 00:51:12.236267 | orchestrator | Saturday 06 September 2025 00:46:39 +0000 (0:00:00.798) 0:01:45.012 **** 2025-09-06 00:51:12.236274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-06 00:51:12.236290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-06 00:51:12.236297 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.236304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-06 00:51:12.236311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-06 00:51:12.236323 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.236330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-06 00:51:12.236339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-06 00:51:12.236346 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.236353 | orchestrator | 2025-09-06 00:51:12.236359 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-06 00:51:12.236366 | orchestrator | Saturday 06 September 2025 00:46:41 +0000 (0:00:01.031) 0:01:46.044 **** 2025-09-06 00:51:12.236372 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.236379 | orchestrator | changed: 
[testbed-node-1] 2025-09-06 00:51:12.236385 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.236392 | orchestrator | 2025-09-06 00:51:12.236398 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-06 00:51:12.236405 | orchestrator | Saturday 06 September 2025 00:46:42 +0000 (0:00:01.192) 0:01:47.236 **** 2025-09-06 00:51:12.236411 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.236418 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.236424 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.236431 | orchestrator | 2025-09-06 00:51:12.236437 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-06 00:51:12.236444 | orchestrator | Saturday 06 September 2025 00:46:44 +0000 (0:00:02.205) 0:01:49.441 **** 2025-09-06 00:51:12.236450 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.236457 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.236463 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.236470 | orchestrator | 2025-09-06 00:51:12.236476 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-06 00:51:12.236483 | orchestrator | Saturday 06 September 2025 00:46:44 +0000 (0:00:00.558) 0:01:50.000 **** 2025-09-06 00:51:12.236489 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.236496 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.236502 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.236509 | orchestrator | 2025-09-06 00:51:12.236515 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-06 00:51:12.236522 | orchestrator | Saturday 06 September 2025 00:46:45 +0000 (0:00:00.489) 0:01:50.489 **** 2025-09-06 00:51:12.236528 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.236535 | orchestrator | 2025-09-06 00:51:12.236541 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-06 00:51:12.236548 | orchestrator | Saturday 06 September 2025 00:46:46 +0000 (0:00:00.881) 0:01:51.370 **** 2025-09-06 00:51:12.236554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 00:51:12.236565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 00:51:12.236580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 00:51:12.236633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 00:51:12.236643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 00:51:12.236693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 00:51:12.236703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236742 | orchestrator | 2025-09-06 00:51:12.236749 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-06 00:51:12.236756 | orchestrator | Saturday 06 September 2025 00:46:50 +0000 (0:00:04.539) 0:01:55.910 **** 2025-09-06 00:51:12.236767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 00:51:12.236777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 00:51:12.236784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 00:51:12.236829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 00:51:12.236844 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.236851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 00:51:12.236862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 00:51:12.236869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236972 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.236984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.236994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.237001 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.237008 | orchestrator | 2025-09-06 00:51:12.237014 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-06 00:51:12.237021 | orchestrator | Saturday 06 September 2025 00:46:51 +0000 (0:00:00.874) 0:01:56.784 **** 2025-09-06 00:51:12.237028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-06 00:51:12.237035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-06 00:51:12.237041 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.237048 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-06 00:51:12.237054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-06 00:51:12.237066 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.237072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-06 00:51:12.237079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-06 00:51:12.237085 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.237092 | orchestrator | 2025-09-06 00:51:12.237099 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-06 00:51:12.237105 | orchestrator | Saturday 06 September 2025 00:46:52 +0000 (0:00:01.023) 0:01:57.808 **** 2025-09-06 00:51:12.237111 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.237117 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.237123 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.237129 | orchestrator | 2025-09-06 00:51:12.237135 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-06 00:51:12.237141 | orchestrator | Saturday 06 September 2025 00:46:54 +0000 (0:00:01.933) 0:01:59.742 **** 2025-09-06 00:51:12.237147 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.237153 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.237159 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.237165 | orchestrator | 2025-09-06 00:51:12.237171 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-06 00:51:12.237177 | orchestrator | Saturday 06 September 2025 00:46:56 +0000 (0:00:01.914) 0:02:01.656 **** 2025-09-06 00:51:12.237183 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.237189 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.237195 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.237201 | orchestrator | 2025-09-06 00:51:12.237207 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-06 00:51:12.237213 | orchestrator | Saturday 06 September 2025 00:46:57 +0000 (0:00:00.516) 0:02:02.173 **** 2025-09-06 00:51:12.237219 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.237225 | orchestrator | 2025-09-06 00:51:12.237231 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-06 00:51:12.237237 | orchestrator | Saturday 06 September 2025 00:46:57 +0000 (0:00:00.788) 0:02:02.962 **** 2025-09-06 00:51:12.237265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:51:12.237278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-06 00:51:12.237291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:51:12.237301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-06 00:51:12.237316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:51:12.237327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 
'yes'}}}})  2025-09-06 00:51:12.237338 | orchestrator | 2025-09-06 00:51:12.237344 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-06 00:51:12.237350 | orchestrator | Saturday 06 September 2025 00:47:02 +0000 (0:00:04.122) 0:02:07.084 **** 2025-09-06 00:51:12.237360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-06 00:51:12.237371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-06 00:51:12.237381 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.237388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-06 00:51:12.237403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-06 00:51:12.237414 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.237421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-06 00:51:12.237435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-06 00:51:12.237446 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.237452 | orchestrator | 2025-09-06 00:51:12.237459 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-06 00:51:12.237465 | orchestrator | Saturday 06 September 2025 00:47:05 +0000 (0:00:03.023) 0:02:10.108 **** 2025-09-06 00:51:12.237472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-06 00:51:12.237478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-06 00:51:12.237485 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.237491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-06 00:51:12.237498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-06 00:51:12.237505 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.237511 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-06 00:51:12.237521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-06 00:51:12.237531 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.237537 | orchestrator | 2025-09-06 00:51:12.237544 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-06 00:51:12.237550 | orchestrator | Saturday 06 September 2025 00:47:08 +0000 (0:00:03.234) 0:02:13.343 **** 2025-09-06 00:51:12.237556 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.237562 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.237568 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.237574 | orchestrator | 2025-09-06 00:51:12.237580 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-06 00:51:12.237586 | orchestrator | Saturday 06 September 2025 00:47:09 +0000 (0:00:01.265) 0:02:14.608 **** 2025-09-06 00:51:12.237592 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.237598 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.237604 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.237625 | orchestrator | 2025-09-06 00:51:12.237635 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-06 00:51:12.237641 | orchestrator | Saturday 06 September 2025 00:47:11 +0000 (0:00:02.081) 0:02:16.689 **** 2025-09-06 00:51:12.237647 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.237654 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.237660 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.237666 | orchestrator | 2025-09-06 00:51:12.237672 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-06 00:51:12.237678 | orchestrator | Saturday 06 September 2025 00:47:12 +0000 (0:00:00.528) 0:02:17.218 **** 2025-09-06 00:51:12.237684 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.237691 | orchestrator | 2025-09-06 00:51:12.237697 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-06 00:51:12.237703 | orchestrator | Saturday 06 September 2025 00:47:12 +0000 (0:00:00.812) 0:02:18.031 **** 2025-09-06 00:51:12.237709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 00:51:12.237716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 00:51:12.237723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 00:51:12.237733 | orchestrator | 2025-09-06 00:51:12.237739 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-06 00:51:12.237745 | orchestrator | Saturday 06 September 2025 00:47:16 +0000 (0:00:03.213) 0:02:21.245 **** 2025-09-06 00:51:12.237756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-06 00:51:12.237768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}})  2025-09-06 00:51:12.237775 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.237781 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.237787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-06 00:51:12.237793 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.237800 | orchestrator | 2025-09-06 00:51:12.237806 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-06 00:51:12.237812 | orchestrator | Saturday 06 September 2025 00:47:16 +0000 (0:00:00.663) 0:02:21.908 **** 2025-09-06 00:51:12.237818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-06 00:51:12.237824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-06 00:51:12.237830 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.237837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-06 00:51:12.237843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-06 00:51:12.237849 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.237855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-06 00:51:12.237865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-06 00:51:12.237871 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.237877 | orchestrator | 2025-09-06 00:51:12.237883 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-06 00:51:12.237889 | orchestrator | Saturday 06 September 2025 00:47:17 +0000 (0:00:00.669) 0:02:22.578 **** 2025-09-06 00:51:12.237895 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.237901 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.237907 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.237913 | orchestrator | 2025-09-06 00:51:12.237920 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-06 00:51:12.237936 | 
orchestrator | Saturday 06 September 2025 00:47:18 +0000 (0:00:01.282) 0:02:23.861 **** 2025-09-06 00:51:12.237943 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.237949 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.237955 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.237961 | orchestrator | 2025-09-06 00:51:12.237967 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-06 00:51:12.237973 | orchestrator | Saturday 06 September 2025 00:47:20 +0000 (0:00:02.100) 0:02:25.961 **** 2025-09-06 00:51:12.237979 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.237985 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.237995 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:51:12.238002 | orchestrator | 2025-09-06 00:51:12.238009 | orchestrator | 2025-09-06 00:51:12.238034 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-06 00:51:12.238042 | orchestrator | Saturday 06 September 2025 00:47:21 +0000 (0:00:00.509) 0:02:26.471 **** 2025-09-06 00:51:12.238049 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.238055 | orchestrator | 2025-09-06 00:51:12.238061 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-06 00:51:12.238067 | orchestrator | Saturday 06 September 2025 00:47:22 +0000 (0:00:00.836) 0:02:27.308 **** 2025-09-06 00:51:12.238084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:51:12.238123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:51:12.238133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:51:12.238145 | orchestrator | 2025-09-06 00:51:12.238151 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-06 00:51:12.238158 | orchestrator | Saturday 06 September 2025 00:47:26 +0000 (0:00:03.910) 0:02:31.219 **** 2025-09-06 00:51:12.238173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-06 
00:51:12.238181 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.238188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-06 00:51:12.238200 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.238216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-06 00:51:12.238223 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.238229 | orchestrator | 2025-09-06 00:51:12.238235 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-06 00:51:12.238241 | orchestrator | Saturday 06 September 2025 00:47:27 +0000 (0:00:01.072) 0:02:32.291 **** 2025-09-06 00:51:12.238248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-06 00:51:12.238259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-06 00:51:12.238265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-06 00:51:12.238272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-06 00:51:12.238278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-06 00:51:12.238285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-06 00:51:12.238292 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-06 00:51:12.238308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-06 00:51:12.238315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-06 00:51:12.238321 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.238327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-06 00:51:12.238337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-06 00:51:12.238343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-06 00:51:12.238349 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.238356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-06 00:51:12.238366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-06 00:51:12.238373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-06 00:51:12.238379 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.238385 | orchestrator | 2025-09-06 00:51:12.238391 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-06 00:51:12.238397 | orchestrator | Saturday 06 September 2025 00:47:28 +0000 (0:00:00.885) 0:02:33.177 **** 2025-09-06 00:51:12.238403 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.238410 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.238416 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.238422 | orchestrator | 2025-09-06 
00:51:12.238428 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-06 00:51:12.238434 | orchestrator | Saturday 06 September 2025 00:47:29 +0000 (0:00:01.291) 0:02:34.468 **** 2025-09-06 00:51:12.238440 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.238446 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.238452 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.238458 | orchestrator | 2025-09-06 00:51:12.238464 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-06 00:51:12.238471 | orchestrator | Saturday 06 September 2025 00:47:31 +0000 (0:00:01.881) 0:02:36.350 **** 2025-09-06 00:51:12.238477 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.238483 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.238489 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.238495 | orchestrator | 2025-09-06 00:51:12.238501 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-06 00:51:12.238507 | orchestrator | Saturday 06 September 2025 00:47:31 +0000 (0:00:00.266) 0:02:36.616 **** 2025-09-06 00:51:12.238513 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.238519 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.238525 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.238531 | orchestrator | 2025-09-06 00:51:12.238538 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-06 00:51:12.238544 | orchestrator | Saturday 06 September 2025 00:47:31 +0000 (0:00:00.418) 0:02:37.035 **** 2025-09-06 00:51:12.238550 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.238556 | orchestrator | 2025-09-06 00:51:12.238562 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-06 00:51:12.238568 | orchestrator | Saturday 06 September 2025 00:47:32 +0000 (0:00:00.912) 0:02:37.947 **** 2025-09-06 00:51:12.238580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:51:12.238594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:51:12.238601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:51:12.238609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-06 00:51:12.238615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:51:12.238622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-06 00:51:12.238636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:51:12.238649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:51:12.238656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-06 00:51:12.238662 | orchestrator | 2025-09-06 00:51:12.238679 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-06 00:51:12.238686 | orchestrator | Saturday 06 September 2025 00:47:36 +0000 (0:00:03.351) 0:02:41.299 **** 2025-09-06 00:51:12.238692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-06 00:51:12.238699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:51:12.238714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-06 00:51:12.238721 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.238731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-06 00:51:12.238738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:51:12.238745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-06 00:51:12.238751 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.238758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-06 00:51:12.238773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:51:12.238784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-06 00:51:12.238790 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.238797 | orchestrator | 2025-09-06 00:51:12.238803 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-06 00:51:12.238809 | orchestrator | Saturday 06 September 2025 00:47:37 +0000 (0:00:00.770) 0:02:42.070 **** 2025-09-06 00:51:12.238815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-06 00:51:12.238822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-06 00:51:12.238829 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.238835 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-06 00:51:12.238842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-06 00:51:12.238848 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.238854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-06 00:51:12.238861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-06 00:51:12.238867 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.238873 | orchestrator | 2025-09-06 00:51:12.238879 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-06 00:51:12.238885 | orchestrator | Saturday 06 September 2025 00:47:37 +0000 (0:00:00.857) 0:02:42.927 **** 2025-09-06 00:51:12.238891 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.238897 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.238903 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.238913 | orchestrator | 2025-09-06 00:51:12.238920 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-06 00:51:12.238958 | orchestrator | Saturday 06 September 2025 00:47:39 +0000 (0:00:01.190) 0:02:44.118 **** 2025-09-06 00:51:12.238965 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.238971 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.238977 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.238983 | orchestrator | 2025-09-06 00:51:12.238989 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-06 00:51:12.238995 | orchestrator | Saturday 06 September 2025 00:47:41 +0000 (0:00:01.936) 0:02:46.054 **** 2025-09-06 00:51:12.239001 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.239008 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.239014 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.239020 | orchestrator | 2025-09-06 00:51:12.239026 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-06 00:51:12.239032 | orchestrator | Saturday 06 September 2025 00:47:41 +0000 (0:00:00.471) 0:02:46.526 **** 2025-09-06 00:51:12.239038 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.239044 | orchestrator | 2025-09-06 00:51:12.239050 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-06 00:51:12.239060 | orchestrator | Saturday 06 September 2025 00:47:42 +0000 (0:00:00.898) 0:02:47.424 **** 2025-09-06 00:51:12.239067 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 00:51:12.239076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 00:51:12.239097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 00:51:12.239112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239118 | orchestrator | 2025-09-06 00:51:12.239124 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-06 00:51:12.239133 | orchestrator | Saturday 06 September 2025 00:47:45 +0000 (0:00:03.185) 0:02:50.610 **** 2025-09-06 00:51:12.239139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-06 00:51:12.239144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239154 | orchestrator | skipping: [testbed-node-0] 2025-09-06 
00:51:12.239160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-06 00:51:12.239169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239175 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.239184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-06 00:51:12.239190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239195 | orchestrator | skipping: [testbed-node-2] 2025-09-06 
00:51:12.239201 | orchestrator | 2025-09-06 00:51:12.239206 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-06 00:51:12.239217 | orchestrator | Saturday 06 September 2025 00:47:46 +0000 (0:00:01.059) 0:02:51.670 **** 2025-09-06 00:51:12.239223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-06 00:51:12.239228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-06 00:51:12.239234 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.239239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-06 00:51:12.239245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-06 00:51:12.239250 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.239255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-06 00:51:12.239261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-06 00:51:12.239266 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.239271 | orchestrator | 2025-09-06 00:51:12.239277 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-06 00:51:12.239282 | orchestrator | Saturday 06 September 2025 00:47:47 +0000 (0:00:00.889) 0:02:52.559 **** 2025-09-06 00:51:12.239287 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.239293 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.239298 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.239303 | orchestrator | 2025-09-06 00:51:12.239309 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-06 00:51:12.239314 | orchestrator | Saturday 06 September 2025 00:47:48 +0000 (0:00:01.334) 0:02:53.893 **** 2025-09-06 00:51:12.239319 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.239325 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.239330 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.239335 | orchestrator | 2025-09-06 00:51:12.239340 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-06 00:51:12.239349 | orchestrator | Saturday 06 September 2025 00:47:51 +0000 (0:00:02.244) 0:02:56.137 **** 2025-09-06 00:51:12.239355 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.239360 | orchestrator | 2025-09-06 00:51:12.239366 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-06 00:51:12.239371 | orchestrator | Saturday 06 September 2025 00:47:52 +0000 (0:00:01.342) 0:02:57.480 
**** 2025-09-06 00:51:12.239380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-06 00:51:12.239386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-06 00:51:12.239416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-06 00:51:12.239442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239468 | orchestrator | 2025-09-06 00:51:12.239474 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-06 00:51:12.239479 | orchestrator | Saturday 06 September 2025 00:47:56 +0000 (0:00:03.600) 0:03:01.081 **** 2025-09-06 00:51:12.239488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-06 00:51:12.239497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239514 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.239520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-06 00:51:12.239529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239552 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.239558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-06 00:51:12.239564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.239584 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.239589 | orchestrator | 2025-09-06 00:51:12.239598 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-06 00:51:12.239604 | orchestrator | Saturday 06 September 2025 00:47:56 +0000 (0:00:00.670) 0:03:01.751 
**** 2025-09-06 00:51:12.239609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-06 00:51:12.239614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-06 00:51:12.239620 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.239628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-06 00:51:12.239634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-06 00:51:12.239639 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.239645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-06 00:51:12.239650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-06 00:51:12.239656 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.239661 | orchestrator | 2025-09-06 00:51:12.239666 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-06 00:51:12.239672 | orchestrator | Saturday 06 September 2025 00:47:58 +0000 (0:00:01.383) 0:03:03.135 **** 2025-09-06 00:51:12.239677 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.239682 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.239688 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.239693 | orchestrator | 2025-09-06 00:51:12.239698 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-06 00:51:12.239704 | orchestrator | Saturday 06 September 2025 00:47:59 +0000 (0:00:01.331) 0:03:04.466 **** 2025-09-06 00:51:12.239709 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.239714 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.239720 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.239725 | orchestrator | 2025-09-06 00:51:12.239730 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-06 00:51:12.239736 | orchestrator | Saturday 06 September 2025 00:48:01 +0000 (0:00:02.082) 0:03:06.549 **** 2025-09-06 00:51:12.239741 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.239746 | orchestrator | 2025-09-06 00:51:12.239752 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-06 00:51:12.239757 | orchestrator | Saturday 06 September 2025 00:48:02 +0000 (0:00:01.266) 0:03:07.815 **** 2025-09-06 00:51:12.239763 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-06 00:51:12.239768 | orchestrator | 2025-09-06 00:51:12.239773 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 
2025-09-06 00:51:12.239779 | orchestrator | Saturday 06 September 2025 00:48:05 +0000 (0:00:02.947) 0:03:10.763 **** 2025-09-06 00:51:12.239788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:51:12.239803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-06 00:51:12.239809 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.239815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:51:12.239821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-06 00:51:12.239830 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.239843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 
00:51:12.239850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-06 00:51:12.239855 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.239861 | orchestrator | 2025-09-06 00:51:12.239866 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-06 00:51:12.239871 | orchestrator | Saturday 06 September 2025 00:48:08 +0000 (0:00:02.387) 0:03:13.150 **** 2025-09-06 00:51:12.239880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:51:12.239891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-06 00:51:12.239897 | orchestrator | skipping: 
[testbed-node-0] 2025-09-06 00:51:12.240017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:51:12.240028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-06 00:51:12.240040 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.240046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:51:12.240055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-06 00:51:12.240061 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.240066 | orchestrator | 2025-09-06 00:51:12.240071 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-06 00:51:12.240077 | orchestrator | Saturday 06 September 2025 00:48:10 +0000 (0:00:02.433) 0:03:15.583 **** 2025-09-06 00:51:12.240087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-06 00:51:12.240093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-06 00:51:12.240105 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.240110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-06 00:51:12.240116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-06 00:51:12.240122 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.240128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-06 00:51:12.240137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-06 00:51:12.240142 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.240148 | orchestrator | 2025-09-06 00:51:12.240153 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-06 00:51:12.240158 | orchestrator | Saturday 06 September 2025 00:48:13 +0000 (0:00:03.200) 0:03:18.784 **** 2025-09-06 00:51:12.240164 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.240169 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.240174 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.240180 | orchestrator | 2025-09-06 00:51:12.240185 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-06 00:51:12.240190 | orchestrator | Saturday 06 September 2025 00:48:15 +0000 (0:00:01.669) 0:03:20.453 **** 2025-09-06 00:51:12.240196 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.240201 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.240206 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.240211 | orchestrator | 2025-09-06 00:51:12.240217 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-06 00:51:12.240222 | orchestrator | Saturday 06 September 2025 00:48:16 +0000 
(0:00:01.243) 0:03:21.696 **** 2025-09-06 00:51:12.240230 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.240240 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.240245 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.240251 | orchestrator | 2025-09-06 00:51:12.240256 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-06 00:51:12.240261 | orchestrator | Saturday 06 September 2025 00:48:16 +0000 (0:00:00.300) 0:03:21.997 **** 2025-09-06 00:51:12.240267 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.240272 | orchestrator | 2025-09-06 00:51:12.240277 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-06 00:51:12.240283 | orchestrator | Saturday 06 September 2025 00:48:18 +0000 (0:00:01.247) 0:03:23.245 **** 2025-09-06 00:51:12.240288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-06 00:51:12.240295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-06 00:51:12.240300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-06 00:51:12.240306 | orchestrator | 2025-09-06 00:51:12.240311 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-06 00:51:12.240317 | orchestrator | Saturday 06 September 2025 00:48:19 +0000 (0:00:01.397) 
0:03:24.642 **** 2025-09-06 00:51:12.240325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-06 00:51:12.240338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-06 00:51:12.240344 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.240350 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.240355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-06 00:51:12.240361 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.240366 | orchestrator | 2025-09-06 00:51:12.240372 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-06 00:51:12.240377 | orchestrator | Saturday 06 September 2025 00:48:19 +0000 (0:00:00.392) 0:03:25.035 **** 2025-09-06 00:51:12.240383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-06 00:51:12.240389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-06 00:51:12.240394 | orchestrator | 
skipping: [testbed-node-0] 2025-09-06 00:51:12.240400 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.240405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-06 00:51:12.240411 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.240416 | orchestrator | 2025-09-06 00:51:12.240422 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-06 00:51:12.240427 | orchestrator | Saturday 06 September 2025 00:48:20 +0000 (0:00:00.831) 0:03:25.867 **** 2025-09-06 00:51:12.240433 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.240438 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.240443 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.240448 | orchestrator | 2025-09-06 00:51:12.240454 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-06 00:51:12.240459 | orchestrator | Saturday 06 September 2025 00:48:21 +0000 (0:00:00.453) 0:03:26.321 **** 2025-09-06 00:51:12.240464 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.240470 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.240475 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.240480 | orchestrator | 2025-09-06 00:51:12.240492 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-06 00:51:12.240497 | orchestrator | Saturday 06 September 2025 00:48:22 +0000 (0:00:01.230) 0:03:27.551 **** 2025-09-06 00:51:12.240502 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.240510 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.240516 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.240521 | orchestrator | 2025-09-06 00:51:12.240526 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-06 00:51:12.240532 | orchestrator | Saturday 06 September 2025 00:48:22 +0000 (0:00:00.323) 0:03:27.874 **** 2025-09-06 00:51:12.240537 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.240542 | orchestrator | 2025-09-06 00:51:12.240548 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-06 00:51:12.240553 | orchestrator | Saturday 06 September 2025 00:48:24 +0000 (0:00:01.425) 0:03:29.300 **** 2025-09-06 00:51:12.240562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 00:51:12.240569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 00:51:12.240575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-06 00:51:12.240641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-06 00:51:12.240648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240696 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.240724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.240731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-06 00:51:12.240800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.240819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-06 00:51:12.240826 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.240833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 00:51:12.240844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-06 00:51:12.240877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240911 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.240921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.240953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.240965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-06 00:51:12.240974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.240981 | orchestrator | 2025-09-06 00:51:12.240987 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-06 00:51:12.240992 | orchestrator | Saturday 06 September 2025 00:48:28 +0000 (0:00:04.274) 0:03:33.575 **** 2025-09-06 00:51:12.241001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 00:51:12.241007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-06 00:51:12.241036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 00:51:12.241074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.241080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-06 
00:51:12.241088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-06 00:51:12.241139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-06 00:51:12.241149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.241160 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.241165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.241263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-06 00:51:12.241298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 00:51:12.241308 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.241313 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.241319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-06 00:51:12.241353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.241387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-06 00:51:12.241408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-06 00:51:12.241422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-06 00:51:12.241428 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.241434 | orchestrator | 2025-09-06 00:51:12.241439 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-06 00:51:12.241444 | orchestrator | Saturday 06 September 2025 00:48:29 +0000 (0:00:01.316) 0:03:34.891 **** 2025-09-06 00:51:12.241454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-06 00:51:12.241460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-06 00:51:12.241465 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.241473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-06 00:51:12.241479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-06 00:51:12.241484 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.241490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-06 00:51:12.241495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-06 00:51:12.241501 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.241506 | orchestrator | 2025-09-06 00:51:12.241511 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-06 00:51:12.241517 | orchestrator | Saturday 06 September 2025 00:48:31 +0000 (0:00:01.631) 0:03:36.523 **** 2025-09-06 00:51:12.241522 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.241527 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.241533 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.241538 | orchestrator | 2025-09-06 00:51:12.241543 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-06 00:51:12.241548 | orchestrator | Saturday 06 September 2025 00:48:32 +0000 (0:00:01.317) 0:03:37.840 **** 2025-09-06 00:51:12.241554 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.241559 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.241564 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.241570 | orchestrator | 2025-09-06 
00:51:12.241575 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-06 00:51:12.241580 | orchestrator | Saturday 06 September 2025 00:48:34 +0000 (0:00:02.046) 0:03:39.887 **** 2025-09-06 00:51:12.241586 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.241591 | orchestrator | 2025-09-06 00:51:12.241596 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-06 00:51:12.241601 | orchestrator | Saturday 06 September 2025 00:48:36 +0000 (0:00:01.163) 0:03:41.050 **** 2025-09-06 00:51:12.241607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.241616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.241631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.241637 | orchestrator | 2025-09-06 00:51:12.241642 | orchestrator | TASK [haproxy-config : Add 
configuration for placement when using single external frontend] *** 2025-09-06 00:51:12.241648 | orchestrator | Saturday 06 September 2025 00:48:40 +0000 (0:00:04.116) 0:03:45.167 **** 2025-09-06 00:51:12.241654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.241659 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.241665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.241670 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.241679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.241688 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.241694 | orchestrator | 2025-09-06 00:51:12.241699 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-06 00:51:12.241705 | orchestrator | Saturday 06 September 2025 00:48:40 +0000 (0:00:00.569) 0:03:45.736 **** 2025-09-06 00:51:12.241710 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-06 00:51:12.241716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-06 00:51:12.241722 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.241730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-06 00:51:12.241736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-06 00:51:12.241741 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.241763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-06 00:51:12.241769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-06 00:51:12.241774 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.241780 | orchestrator | 2025-09-06 00:51:12.241785 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-06 00:51:12.241790 | orchestrator | Saturday 06 September 2025 00:48:41 +0000 (0:00:00.747) 0:03:46.483 **** 2025-09-06 00:51:12.241796 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.241801 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.241806 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.241812 | orchestrator | 2025-09-06 00:51:12.241817 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-06 00:51:12.241822 | orchestrator | Saturday 06 September 2025 00:48:42 +0000 (0:00:01.333) 0:03:47.816 **** 2025-09-06 00:51:12.241827 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.241833 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.241838 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.241843 | orchestrator | 2025-09-06 00:51:12.241849 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-06 00:51:12.241854 | orchestrator | Saturday 06 September 2025 00:48:44 +0000 (0:00:02.076) 0:03:49.893 **** 2025-09-06 00:51:12.241860 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.241869 | orchestrator | 2025-09-06 00:51:12.241876 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-06 00:51:12.241882 | orchestrator | Saturday 06 September 2025 00:48:46 +0000 (0:00:01.527) 0:03:51.421 **** 2025-09-06 00:51:12.241889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.241902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.241944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.241972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.241985 | orchestrator | 2025-09-06 00:51:12.241991 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-06 00:51:12.241997 | orchestrator | Saturday 06 September 2025 00:48:50 +0000 (0:00:04.245) 0:03:55.666 **** 2025-09-06 00:51:12.242004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.242100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.242113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.242120 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.242131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.242138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.242149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.242156 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.242166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.242173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.242187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.242194 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.242200 | orchestrator | 2025-09-06 00:51:12.242207 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-06 00:51:12.242213 | orchestrator | Saturday 06 September 2025 00:48:51 +0000 (0:00:01.240) 0:03:56.906 **** 2025-09-06 00:51:12.242219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242247 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.242253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242269 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242275 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.242280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-06 00:51:12.242305 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.242311 | orchestrator | 2025-09-06 00:51:12.242316 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-06 00:51:12.242322 | orchestrator | Saturday 06 September 2025 00:48:52 +0000 (0:00:00.894) 0:03:57.800 **** 2025-09-06 00:51:12.242327 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.242332 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.242338 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.242343 | orchestrator | 2025-09-06 00:51:12.242348 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-06 00:51:12.242354 | orchestrator | Saturday 06 September 2025 00:48:54 +0000 (0:00:01.289) 0:03:59.090 **** 2025-09-06 00:51:12.242359 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.242364 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.242370 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.242375 | orchestrator | 2025-09-06 00:51:12.242380 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-06 00:51:12.242385 | orchestrator | Saturday 06 September 2025 00:48:56 +0000 (0:00:02.017) 0:04:01.107 **** 2025-09-06 00:51:12.242391 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.242400 | orchestrator | 2025-09-06 00:51:12.242405 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-06 00:51:12.242414 | orchestrator | Saturday 06 September 2025 00:48:57 +0000 (0:00:01.658) 0:04:02.766 **** 2025-09-06 00:51:12.242419 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-06 00:51:12.242425 | orchestrator | 2025-09-06 00:51:12.242430 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-06 00:51:12.242436 | orchestrator | Saturday 06 September 2025 00:48:58 +0000 (0:00:00.839) 0:04:03.605 **** 2025-09-06 00:51:12.242441 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-06 00:51:12.242447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-06 00:51:12.242453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-06 00:51:12.242458 | orchestrator | 2025-09-06 00:51:12.242464 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-06 00:51:12.242469 | orchestrator | Saturday 06 September 2025 00:49:03 +0000 (0:00:04.629) 0:04:08.235 **** 2025-09-06 00:51:12.242475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-06 00:51:12.242480 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.242489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-06 00:51:12.242494 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.242500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-06 00:51:12.242509 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.242515 | orchestrator | 2025-09-06 00:51:12.242520 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-06 00:51:12.242526 | orchestrator | Saturday 06 September 2025 00:49:04 +0000 (0:00:01.156) 0:04:09.392 **** 2025-09-06 00:51:12.242533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-06 00:51:12.242540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-06 00:51:12.242546 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.242551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-06 00:51:12.242557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-06 00:51:12.242562 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.242568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-06 00:51:12.242573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-06 00:51:12.242579 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.242584 | orchestrator | 2025-09-06 00:51:12.242590 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-06 00:51:12.242595 | orchestrator | Saturday 06 September 2025 00:49:05 +0000 (0:00:01.574) 0:04:10.966 **** 2025-09-06 00:51:12.242600 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.242606 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.242611 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.242616 | orchestrator | 2025-09-06 00:51:12.242622 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-06 00:51:12.242627 | orchestrator | Saturday 06 September 2025 00:49:08 +0000 (0:00:02.392) 0:04:13.359 **** 2025-09-06 00:51:12.242632 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.242637 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.242643 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.242648 | orchestrator | 2025-09-06 00:51:12.242653 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 
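The nova-cell console-proxy tasks in this part of the log all iterate over the same shape of service definition: a dict keyed by service name whose 'haproxy' sub-dict describes one internal and one external frontend (enabled, mode, port, listen_port, plus optional backend_http_extra such as 'timeout tunnel 1h' for the noVNC proxy). As an illustration only, and not kolla-ansible's actual template logic, a minimal Python sketch of how one of the nova_novncproxy entries seen above could be rendered into an HAProxy-style listen block (the render_listen helper and the <vip> placeholder are assumptions; the flag values and node IPs are taken from the log output):

    # Illustrative sketch only; values copied from the log output above.
    # render_listen is a hypothetical stand-in for kolla's Jinja2 template.
    def render_listen(name, svc, backends):
        lines = [f"listen {name}",
                 f"    mode {svc['mode']}",
                 f"    bind <vip>:{svc['listen_port']}"]
        # extra backend options, e.g. long tunnel timeouts for console proxies
        lines += [f"    {extra}" for extra in svc.get('backend_http_extra', [])]
        lines += [f"    server {host} {ip}:{svc['port']} check"
                  for host, ip in backends]
        return "\n".join(lines)

    nova_novncproxy = {
        'enabled': True, 'mode': 'http', 'external': False,
        'port': '6080', 'listen_port': '6080',
        'backend_http_extra': ['timeout tunnel 1h'],
    }
    print(render_listen('nova_novncproxy', nova_novncproxy,
                        [('testbed-node-0', '192.168.16.10'),
                         ('testbed-node-1', '192.168.16.11'),
                         ('testbed-node-2', '192.168.16.12')]))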
2025-09-06 00:51:12.242659 | orchestrator | Saturday 06 September 2025 00:49:11 +0000 (0:00:02.845) 0:04:16.205 **** 2025-09-06 00:51:12.242664 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-06 00:51:12.242670 | orchestrator | 2025-09-06 00:51:12.242675 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-06 00:51:12.242680 | orchestrator | Saturday 06 September 2025 00:49:12 +0000 (0:00:01.386) 0:04:17.592 **** 2025-09-06 00:51:12.242686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-06 00:51:12.242695 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.242704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-06 00:51:12.242709 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.242718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-06 00:51:12.242723 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.242729 | orchestrator | 2025-09-06 00:51:12.242734 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-06 00:51:12.242740 | orchestrator | Saturday 06 September 2025 00:49:13 +0000 (0:00:01.232) 0:04:18.824 **** 2025-09-06 00:51:12.242745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-06 00:51:12.242751 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.242756 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-06 00:51:12.242762 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.242767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-06 00:51:12.242773 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.242778 | orchestrator | 2025-09-06 00:51:12.242783 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-06 00:51:12.242792 | orchestrator | Saturday 06 September 2025 00:49:15 +0000 (0:00:01.325) 0:04:20.150 **** 2025-09-06 00:51:12.242798 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.242803 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.242808 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.242814 | orchestrator | 2025-09-06 00:51:12.242819 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-06 00:51:12.242824 | orchestrator | Saturday 06 September 2025 00:49:16 +0000 (0:00:01.796) 0:04:21.946 **** 2025-09-06 00:51:12.242830 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.242835 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.242841 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.242846 | orchestrator | 2025-09-06 00:51:12.242851 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-06 00:51:12.242857 | orchestrator | Saturday 06 September 2025 00:49:19 +0000 (0:00:02.355) 0:04:24.303 **** 2025-09-06 00:51:12.242862 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.242867 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.242873 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.242878 | orchestrator | 2025-09-06 00:51:12.242883 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-06 00:51:12.242889 | orchestrator | Saturday 06 September 2025 00:49:22 +0000 (0:00:03.004) 0:04:27.307 **** 2025-09-06 00:51:12.242894 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-06 00:51:12.242900 | orchestrator | 2025-09-06 00:51:12.242910 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-06 00:51:12.242915 | orchestrator | Saturday 06 September 2025 00:49:23 +0000 (0:00:00.822) 0:04:28.130 **** 2025-09-06 00:51:12.242921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 
'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-06 00:51:12.242965 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.242975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-06 00:51:12.242981 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.242986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-06 00:51:12.242992 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.242997 | orchestrator | 2025-09-06 00:51:12.243003 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-06 00:51:12.243008 | orchestrator | Saturday 06 September 2025 00:49:24 +0000 (0:00:01.337) 0:04:29.468 **** 2025-09-06 00:51:12.243018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-06 00:51:12.243023 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.243029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-06 00:51:12.243035 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.243040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 
'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-06 00:51:12.243046 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.243051 | orchestrator | 2025-09-06 00:51:12.243057 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-06 00:51:12.243062 | orchestrator | Saturday 06 September 2025 00:49:25 +0000 (0:00:01.350) 0:04:30.818 **** 2025-09-06 00:51:12.243068 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.243073 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.243078 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.243084 | orchestrator | 2025-09-06 00:51:12.243092 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-06 00:51:12.243098 | orchestrator | Saturday 06 September 2025 00:49:27 +0000 (0:00:01.563) 0:04:32.382 **** 2025-09-06 00:51:12.243103 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.243109 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.243114 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.243120 | orchestrator | 2025-09-06 00:51:12.243125 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-06 00:51:12.243131 | orchestrator | Saturday 06 September 2025 00:49:29 +0000 (0:00:02.270) 0:04:34.653 **** 2025-09-06 00:51:12.243136 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.243141 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.243147 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.243152 | orchestrator | 2025-09-06 00:51:12.243158 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-06 00:51:12.243163 | orchestrator | Saturday 06 September 2025 00:49:32 +0000 (0:00:03.299) 0:04:37.952 **** 2025-09-06 00:51:12.243168 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.243174 | orchestrator | 2025-09-06 00:51:12.243179 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-06 00:51:12.243185 | orchestrator | Saturday 06 September 2025 00:49:34 +0000 (0:00:01.670) 0:04:39.623 **** 2025-09-06 00:51:12.243194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.243204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 00:51:12.243210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.243230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.243240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 
'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.243250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 00:51:12.243256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 00:51:12.243262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.243367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.243373 | orchestrator | 2025-09-06 00:51:12.243378 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-06 00:51:12.243384 | orchestrator | Saturday 06 September 2025 00:49:37 +0000 (0:00:03.366) 0:04:42.989 **** 2025-09-06 00:51:12.243390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.243395 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 00:51:12.243404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.243430 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.243436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.243441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 00:51:12.243447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.243474 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.243480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.243486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 00:51:12.243491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 00:51:12.243504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 00:51:12.243512 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.243517 | orchestrator | 2025-09-06 00:51:12.243522 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-06 00:51:12.243527 | orchestrator | Saturday 06 September 2025 00:49:38 +0000 (0:00:00.735) 0:04:43.725 **** 2025-09-06 00:51:12.243532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-06 00:51:12.243537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-06 00:51:12.243542 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.243550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-06 00:51:12.243555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-06 00:51:12.243560 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.243564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-06 00:51:12.243569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-06 00:51:12.243574 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.243579 | orchestrator | 2025-09-06 00:51:12.243584 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-06 00:51:12.243588 | orchestrator | Saturday 06 September 2025 00:49:40 +0000 (0:00:01.610) 0:04:45.335 **** 2025-09-06 00:51:12.243593 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.243598 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.243603 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.243607 | orchestrator | 2025-09-06 00:51:12.243612 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-06 00:51:12.243617 | orchestrator | Saturday 06 September 2025 00:49:41 +0000 (0:00:01.454) 0:04:46.790 **** 2025-09-06 00:51:12.243622 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.243626 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.243631 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.243636 | orchestrator | 2025-09-06 00:51:12.243641 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-06 00:51:12.243646 | orchestrator | Saturday 06 September 2025 00:49:43 +0000 (0:00:02.029) 0:04:48.819 **** 2025-09-06 00:51:12.243650 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.243655 | orchestrator | 2025-09-06 00:51:12.243660 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-06 00:51:12.243664 | orchestrator | Saturday 06 September 2025 00:49:45 +0000 (0:00:01.333) 0:04:50.153 **** 2025-09-06 00:51:12.243669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:51:12.243683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:51:12.243692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:51:12.243698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:51:12.243703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:51:12.243716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:51:12.243721 | orchestrator | 2025-09-06 00:51:12.243726 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-06 00:51:12.243731 | orchestrator | Saturday 06 September 2025 00:49:50 +0000 (0:00:05.376) 0:04:55.530 **** 2025-09-06 00:51:12.243739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-06 00:51:12.243745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-06 
00:51:12.243750 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.243755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-06 00:51:12.243766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-06 00:51:12.243772 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.243780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-06 00:51:12.243785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-06 00:51:12.243790 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.243795 | orchestrator | 2025-09-06 00:51:12.243800 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-06 00:51:12.243805 | orchestrator | Saturday 06 September 2025 00:49:51 +0000 (0:00:00.682) 0:04:56.212 **** 2025-09-06 00:51:12.243810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-06 00:51:12.243815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-06 00:51:12.243825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-06 00:51:12.243830 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.243834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-06 00:51:12.243839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-06 00:51:12.243844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-06 00:51:12.243849 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.243854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-06 00:51:12.243862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-06 00:51:12.243867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-06 00:51:12.243872 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.243876 | orchestrator | 2025-09-06 00:51:12.243881 | orchestrator | TASK 
[proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-06 00:51:12.243886 | orchestrator | Saturday 06 September 2025 00:49:52 +0000 (0:00:00.958) 0:04:57.171 **** 2025-09-06 00:51:12.243891 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.243895 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.243900 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.243905 | orchestrator | 2025-09-06 00:51:12.243910 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-06 00:51:12.243914 | orchestrator | Saturday 06 September 2025 00:49:53 +0000 (0:00:00.880) 0:04:58.051 **** 2025-09-06 00:51:12.243919 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.243934 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.243940 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.243944 | orchestrator | 2025-09-06 00:51:12.243953 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-06 00:51:12.243958 | orchestrator | Saturday 06 September 2025 00:49:54 +0000 (0:00:01.380) 0:04:59.431 **** 2025-09-06 00:51:12.243963 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.243967 | orchestrator | 2025-09-06 00:51:12.243972 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-06 00:51:12.243977 | orchestrator | Saturday 06 September 2025 00:49:55 +0000 (0:00:01.420) 0:05:00.851 **** 2025-09-06 00:51:12.243982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-06 00:51:12.243990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-06 00:51:12.243996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 00:51:12.244001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 00:51:12.244008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-06 00:51:12.244017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 00:51:12.244041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244119 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-06 00:51:12.244125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-06 00:51:12.244130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244151 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-06 00:51:12.244161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-06 00:51:12.244166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-06 00:51:12.244174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-06 00:51:12.244187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244216 | orchestrator | 2025-09-06 00:51:12.244221 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external 
frontend] *** 2025-09-06 00:51:12.244226 | orchestrator | Saturday 06 September 2025 00:50:00 +0000 (0:00:04.488) 0:05:05.340 **** 2025-09-06 00:51:12.244233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-06 00:51:12.244238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 00:51:12.244249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-06 00:51:12.244270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 00:51:12.244277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-06 00:51:12.244294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-06 00:51:12.244304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-06 00:51:12.244324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-06 00:51:12.244341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-06 00:51:12.244356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244364 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.244371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 00:51:12.244380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244393 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.244398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-06 00:51:12.244416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-06 00:51:12.244426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 00:51:12.244439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 00:51:12.244444 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.244449 | orchestrator | 2025-09-06 00:51:12.244454 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-06 00:51:12.244458 | orchestrator | Saturday 06 September 2025 00:50:01 +0000 (0:00:01.218) 0:05:06.558 **** 2025-09-06 00:51:12.244463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-06 00:51:12.244469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-06 00:51:12.244474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-06 00:51:12.244479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-06 00:51:12.244484 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.244489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-06 00:51:12.244493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-06 00:51:12.244498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-06 00:51:12.244507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-06 00:51:12.244514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-06 00:51:12.244519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-06 00:51:12.244524 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.244529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-06 00:51:12.244536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-06 00:51:12.244541 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.244546 | orchestrator | 2025-09-06 00:51:12.244551 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-06 00:51:12.244556 | orchestrator | Saturday 06 September 2025 00:50:02 +0000 (0:00:00.990) 0:05:07.548 **** 2025-09-06 00:51:12.244561 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.244565 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.244570 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.244575 | orchestrator | 2025-09-06 00:51:12.244580 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-06 00:51:12.244584 | orchestrator | Saturday 06 September 2025 00:50:02 +0000 (0:00:00.446) 0:05:07.995 **** 2025-09-06 00:51:12.244589 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.244594 | orchestrator | skipping: 
[testbed-node-1] 2025-09-06 00:51:12.244599 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.244603 | orchestrator | 2025-09-06 00:51:12.244608 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-06 00:51:12.244613 | orchestrator | Saturday 06 September 2025 00:50:04 +0000 (0:00:01.559) 0:05:09.554 **** 2025-09-06 00:51:12.244617 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.244622 | orchestrator | 2025-09-06 00:51:12.244627 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-06 00:51:12.244632 | orchestrator | Saturday 06 September 2025 00:50:06 +0000 (0:00:01.583) 0:05:11.137 **** 2025-09-06 00:51:12.244637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:51:12.244646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:51:12.244654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-06 00:51:12.244659 | orchestrator | 2025-09-06 00:51:12.244666 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-06 00:51:12.244671 | orchestrator | Saturday 06 September 2025 00:50:08 +0000 (0:00:02.416) 0:05:13.554 **** 2025-09-06 00:51:12.244676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-06 00:51:12.244682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-06 00:51:12.244690 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.244695 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.244700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-06 00:51:12.244708 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.244713 | orchestrator | 2025-09-06 00:51:12.244718 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-06 00:51:12.244723 | orchestrator | Saturday 06 September 2025 00:50:08 +0000 (0:00:00.344) 0:05:13.898 **** 2025-09-06 00:51:12.244728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-06 00:51:12.244733 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.244737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-06 00:51:12.244742 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.244747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-06 00:51:12.244752 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.244757 | orchestrator | 2025-09-06 00:51:12.244761 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-06 00:51:12.244766 | orchestrator | Saturday 06 September 2025 00:50:09 +0000 (0:00:00.824) 0:05:14.722 **** 2025-09-06 00:51:12.244773 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.244778 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.244783 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.244788 | orchestrator | 2025-09-06 00:51:12.244793 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-06 00:51:12.244797 | orchestrator | Saturday 06 September 2025 00:50:10 +0000 (0:00:00.410) 0:05:15.134 **** 2025-09-06 00:51:12.244802 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.244807 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.244811 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.244816 | orchestrator | 2025-09-06 00:51:12.244821 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-06 00:51:12.244826 | orchestrator | Saturday 06 September 2025 00:50:11 +0000 (0:00:01.136) 0:05:16.270 **** 2025-09-06 00:51:12.244830 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:51:12.244835 | orchestrator | 2025-09-06 00:51:12.244840 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-06 00:51:12.244844 | orchestrator | Saturday 06 September 2025 00:50:12 +0000 (0:00:01.557) 0:05:17.828 **** 2025-09-06 00:51:12.244853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.244858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.244866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.244874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.244880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.244888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-06 00:51:12.244893 | orchestrator | 2025-09-06 00:51:12.244898 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-06 00:51:12.244903 | orchestrator | Saturday 06 September 2025 00:50:18 +0000 (0:00:05.671) 0:05:23.500 **** 2025-09-06 00:51:12.244912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.244919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.244937 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.244942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.244951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.244956 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.244961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.244969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-06 00:51:12.244974 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.244979 | orchestrator | 2025-09-06 00:51:12.244984 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-06 00:51:12.244991 | orchestrator | Saturday 06 September 2025 00:50:19 +0000 (0:00:00.630) 0:05:24.130 **** 2025-09-06 00:51:12.244996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245019 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245043 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-06 00:51:12.245067 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245072 | orchestrator | 2025-09-06 00:51:12.245077 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-06 00:51:12.245082 | orchestrator | Saturday 06 September 2025 00:50:20 +0000 (0:00:01.701) 0:05:25.831 **** 2025-09-06 00:51:12.245086 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.245091 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.245098 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.245103 | orchestrator | 2025-09-06 00:51:12.245108 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-06 00:51:12.245112 | orchestrator | Saturday 06 September 2025 00:50:22 +0000 (0:00:01.340) 0:05:27.172 **** 2025-09-06 00:51:12.245117 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.245122 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.245127 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.245131 | orchestrator | 2025-09-06 00:51:12.245136 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-06 00:51:12.245144 | orchestrator | Saturday 06 September 2025 00:50:24 +0000 (0:00:02.144) 0:05:29.316 **** 2025-09-06 00:51:12.245149 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245154 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245158 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245163 | orchestrator | 2025-09-06 00:51:12.245168 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-06 00:51:12.245173 | orchestrator | Saturday 06 September 2025 00:50:24 +0000 (0:00:00.348) 0:05:29.665 **** 2025-09-06 00:51:12.245177 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245182 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245187 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245191 | orchestrator | 2025-09-06 00:51:12.245196 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-06 00:51:12.245203 | orchestrator | Saturday 06 September 2025 00:50:24 +0000 (0:00:00.334) 0:05:29.999 **** 2025-09-06 00:51:12.245208 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245213 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245218 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245223 | orchestrator | 2025-09-06 00:51:12.245227 | orchestrator | TASK [include_role : venus] 
**************************************************** 2025-09-06 00:51:12.245232 | orchestrator | Saturday 06 September 2025 00:50:25 +0000 (0:00:00.739) 0:05:30.739 **** 2025-09-06 00:51:12.245237 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245242 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245246 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245251 | orchestrator | 2025-09-06 00:51:12.245256 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-06 00:51:12.245261 | orchestrator | Saturday 06 September 2025 00:50:26 +0000 (0:00:00.346) 0:05:31.086 **** 2025-09-06 00:51:12.245265 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245270 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245275 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245279 | orchestrator | 2025-09-06 00:51:12.245284 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-06 00:51:12.245289 | orchestrator | Saturday 06 September 2025 00:50:26 +0000 (0:00:00.317) 0:05:31.403 **** 2025-09-06 00:51:12.245294 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245298 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245303 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245308 | orchestrator | 2025-09-06 00:51:12.245312 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-06 00:51:12.245317 | orchestrator | Saturday 06 September 2025 00:50:27 +0000 (0:00:00.915) 0:05:32.319 **** 2025-09-06 00:51:12.245322 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.245327 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.245331 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.245336 | orchestrator | 2025-09-06 00:51:12.245341 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-06 00:51:12.245346 | orchestrator | Saturday 06 September 2025 00:50:28 +0000 (0:00:00.746) 0:05:33.065 **** 2025-09-06 00:51:12.245350 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.245355 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.245360 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.245364 | orchestrator | 2025-09-06 00:51:12.245369 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-06 00:51:12.245374 | orchestrator | Saturday 06 September 2025 00:50:28 +0000 (0:00:00.352) 0:05:33.418 **** 2025-09-06 00:51:12.245379 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.245383 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.245388 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.245393 | orchestrator | 2025-09-06 00:51:12.245398 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-06 00:51:12.245402 | orchestrator | Saturday 06 September 2025 00:50:29 +0000 (0:00:00.918) 0:05:34.336 **** 2025-09-06 00:51:12.245410 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.245415 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.245420 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.245425 | orchestrator | 2025-09-06 00:51:12.245429 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-06 00:51:12.245434 | orchestrator | Saturday 06 September 2025 00:50:30 
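
The loadbalancer handlers in this block restart keepalived, haproxy and proxysql on the backup group first; the corresponding master handlers are skipped for all three nodes in this run. A rough sketch of that backup-first ordering, where the VIP, the interface name and the use of the docker CLI are assumptions for this testbed rather than kolla-ansible code:

    # Sketch of a backup-first rolling restart of the loadbalancer containers.
    # VIP, interface and container names are assumptions, not values from this log.
    import subprocess

    VIP = "192.168.16.254"      # assumed internal VIP
    API_INTERFACE = "vxlan0"    # assumed API interface
    CONTAINERS = ["keepalived", "haproxy", "proxysql"]

    def holds_vip() -> bool:
        """True if this node currently owns the VIP (keepalived master)."""
        out = subprocess.run(["ip", "-o", "addr", "show", "dev", API_INTERFACE],
                             capture_output=True, text=True, check=True).stdout
        return VIP in out

    def restart_containers() -> None:
        for name in CONTAINERS:
            subprocess.run(["docker", "restart", name], check=True)

    if not holds_vip():
        # backup nodes go first, like the "Stop/Start backup ..." handlers above
        restart_containers()
    else:
        # the VIP holder would only be restarted afterwards; in this run the
        # "Stop/Start master ..." handlers were skipped
        print("VIP holder - deferring restart until the backups are back")
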
+0000 (0:00:01.199) 0:05:35.535 **** 2025-09-06 00:51:12.245439 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.245444 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.245448 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.245453 | orchestrator | 2025-09-06 00:51:12.245458 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-06 00:51:12.245462 | orchestrator | Saturday 06 September 2025 00:50:31 +0000 (0:00:00.888) 0:05:36.424 **** 2025-09-06 00:51:12.245467 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.245472 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.245477 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.245481 | orchestrator | 2025-09-06 00:51:12.245486 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-06 00:51:12.245491 | orchestrator | Saturday 06 September 2025 00:50:41 +0000 (0:00:09.902) 0:05:46.326 **** 2025-09-06 00:51:12.245496 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.245500 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.245505 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.245510 | orchestrator | 2025-09-06 00:51:12.245514 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-06 00:51:12.245519 | orchestrator | Saturday 06 September 2025 00:50:42 +0000 (0:00:00.742) 0:05:47.069 **** 2025-09-06 00:51:12.245524 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.245529 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.245533 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.245538 | orchestrator | 2025-09-06 00:51:12.245546 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-06 00:51:12.245551 | orchestrator | Saturday 06 September 2025 00:50:55 +0000 (0:00:13.667) 0:06:00.737 **** 2025-09-06 00:51:12.245556 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.245561 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.245565 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.245570 | orchestrator | 2025-09-06 00:51:12.245575 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-06 00:51:12.245579 | orchestrator | Saturday 06 September 2025 00:50:56 +0000 (0:00:01.162) 0:06:01.900 **** 2025-09-06 00:51:12.245584 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:51:12.245589 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:51:12.245594 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:51:12.245598 | orchestrator | 2025-09-06 00:51:12.245603 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-06 00:51:12.245608 | orchestrator | Saturday 06 September 2025 00:51:05 +0000 (0:00:08.440) 0:06:10.340 **** 2025-09-06 00:51:12.245613 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245617 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245622 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245627 | orchestrator | 2025-09-06 00:51:12.245631 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-06 00:51:12.245636 | orchestrator | Saturday 06 September 2025 00:51:05 +0000 (0:00:00.358) 0:06:10.698 **** 2025-09-06 00:51:12.245641 | orchestrator | skipping: [testbed-node-0] 2025-09-06 
00:51:12.245648 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245653 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245658 | orchestrator | 2025-09-06 00:51:12.245663 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-06 00:51:12.245667 | orchestrator | Saturday 06 September 2025 00:51:06 +0000 (0:00:00.354) 0:06:11.053 **** 2025-09-06 00:51:12.245672 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245681 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245686 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245690 | orchestrator | 2025-09-06 00:51:12.245695 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-06 00:51:12.245700 | orchestrator | Saturday 06 September 2025 00:51:06 +0000 (0:00:00.780) 0:06:11.833 **** 2025-09-06 00:51:12.245705 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245709 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245714 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245719 | orchestrator | 2025-09-06 00:51:12.245724 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-06 00:51:12.245728 | orchestrator | Saturday 06 September 2025 00:51:07 +0000 (0:00:00.337) 0:06:12.171 **** 2025-09-06 00:51:12.245733 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245738 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245742 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245747 | orchestrator | 2025-09-06 00:51:12.245752 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-06 00:51:12.245757 | orchestrator | Saturday 06 September 2025 00:51:07 +0000 (0:00:00.417) 0:06:12.588 **** 2025-09-06 00:51:12.245761 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:51:12.245766 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:51:12.245771 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:51:12.245776 | orchestrator | 2025-09-06 00:51:12.245780 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-06 00:51:12.245785 | orchestrator | Saturday 06 September 2025 00:51:07 +0000 (0:00:00.345) 0:06:12.933 **** 2025-09-06 00:51:12.245790 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.245795 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.245799 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.245804 | orchestrator | 2025-09-06 00:51:12.245809 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-06 00:51:12.245814 | orchestrator | Saturday 06 September 2025 00:51:09 +0000 (0:00:01.362) 0:06:14.295 **** 2025-09-06 00:51:12.245818 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:51:12.245823 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:51:12.245828 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:51:12.245832 | orchestrator | 2025-09-06 00:51:12.245837 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:51:12.245842 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-06 00:51:12.245847 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-06 
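
The final two wait handlers above simply block until haproxy and proxysql accept connections on the VIP. A small sketch of such a wait; the VIP address and the two ports are placeholders, neither is stated in this part of the log:

    # Sketch: wait until a service listens on the VIP, similar in spirit to the
    # "Wait for haproxy/proxysql to listen on VIP" handlers above.
    import socket
    import time

    VIP = "192.168.16.254"   # assumed internal VIP
    PORTS = [61313, 6032]    # assumed haproxy monitor / proxysql admin ports

    def wait_for_port(host: str, port: int, timeout: float = 60.0) -> None:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return  # port is accepting connections
            except OSError:
                time.sleep(1)  # not listening yet, retry
        raise TimeoutError(f"{host}:{port} did not start listening within {timeout}s")

    for p in PORTS:
        wait_for_port(VIP, p)
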
00:51:12.245852 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-06 00:51:12.245857 | orchestrator | 2025-09-06 00:51:12.245862 | orchestrator | 2025-09-06 00:51:12.245866 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:51:12.245871 | orchestrator | Saturday 06 September 2025 00:51:10 +0000 (0:00:00.890) 0:06:15.186 **** 2025-09-06 00:51:12.245876 | orchestrator | =============================================================================== 2025-09-06 00:51:12.245880 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.67s 2025-09-06 00:51:12.245885 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.90s 2025-09-06 00:51:12.245890 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.44s 2025-09-06 00:51:12.245895 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.55s 2025-09-06 00:51:12.245899 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.67s 2025-09-06 00:51:12.245904 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.38s 2025-09-06 00:51:12.245912 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.63s 2025-09-06 00:51:12.245920 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.59s 2025-09-06 00:51:12.245935 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.54s 2025-09-06 00:51:12.245940 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.49s 2025-09-06 00:51:12.245944 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.39s 2025-09-06 00:51:12.245949 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.27s 2025-09-06 00:51:12.245954 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.25s 2025-09-06 00:51:12.245958 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.14s 2025-09-06 00:51:12.245963 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.12s 2025-09-06 00:51:12.245968 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.12s 2025-09-06 00:51:12.245972 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 3.92s 2025-09-06 00:51:12.245977 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.91s 2025-09-06 00:51:12.245982 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.60s 2025-09-06 00:51:12.245987 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.56s 2025-09-06 00:51:15.255283 | orchestrator | 2025-09-06 00:51:15 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:51:15.256319 | orchestrator | 2025-09-06 00:51:15 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:51:15.258170 | orchestrator | 2025-09-06 00:51:15 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:51:15.258194 | orchestrator | 2025-09-06 00:51:15 | INFO  | Wait 1 
second(s) until the next check
2025-09-06 00:51:18 - 00:52:58 | orchestrator | [34 further polling rounds condensed: tasks e6eb30b9-f164-4ec9-ad21-57ae04d89d4b, dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 and 69784897-053f-4c47-a2d1-589b5b14201e each remained in state STARTED; every check, roughly 3 seconds apart, was followed by "Wait 1 second(s) until the next check"]
2025-09-06 00:53:01.947185 | orchestrator
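
The status lines above come from the deployment driver polling the state of the three background tasks until each of them leaves STARTED. A minimal sketch of that polling pattern; get_task_state is a stand-in callable, not the actual OSISM client API:

    # Sketch: poll task states, print them like the log above, and stop once every
    # task has reached a terminal state.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)
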
| 2025-09-06 00:53:01 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:01.949895 | orchestrator | 2025-09-06 00:53:01 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:01.952227 | orchestrator | 2025-09-06 00:53:01 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:53:01.952254 | orchestrator | 2025-09-06 00:53:01 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:05.013573 | orchestrator | 2025-09-06 00:53:05 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:05.018356 | orchestrator | 2025-09-06 00:53:05 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:05.018402 | orchestrator | 2025-09-06 00:53:05 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:53:05.018415 | orchestrator | 2025-09-06 00:53:05 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:08.054999 | orchestrator | 2025-09-06 00:53:08 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:08.055223 | orchestrator | 2025-09-06 00:53:08 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:08.057857 | orchestrator | 2025-09-06 00:53:08 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:53:08.058317 | orchestrator | 2025-09-06 00:53:08 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:11.112040 | orchestrator | 2025-09-06 00:53:11 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:11.114094 | orchestrator | 2025-09-06 00:53:11 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:11.116348 | orchestrator | 2025-09-06 00:53:11 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state STARTED 2025-09-06 00:53:11.116901 | orchestrator | 2025-09-06 00:53:11 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:14.170247 | orchestrator | 2025-09-06 00:53:14 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:14.173255 | orchestrator | 2025-09-06 00:53:14 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:14.175731 | orchestrator | 2025-09-06 00:53:14 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:14.182198 | orchestrator | 2025-09-06 00:53:14 | INFO  | Task 69784897-053f-4c47-a2d1-589b5b14201e is in state SUCCESS 2025-09-06 00:53:14.185791 | orchestrator | 2025-09-06 00:53:14.185826 | orchestrator | 2025-09-06 00:53:14.185838 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-06 00:53:14.185875 | orchestrator | 2025-09-06 00:53:14.185886 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-06 00:53:14.185896 | orchestrator | Saturday 06 September 2025 00:42:11 +0000 (0:00:00.607) 0:00:00.607 **** 2025-09-06 00:53:14.185907 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.185918 | orchestrator | 2025-09-06 00:53:14.185928 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-06 00:53:14.185937 | orchestrator | Saturday 06 September 2025 00:42:12 +0000 (0:00:00.955) 
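
The atomic-host check that starts here is typically done by testing for the ostree marker file; a sketch of the equivalent test, with the path assumed rather than quoted from the role:

    # Sketch: derive an is_atomic-style fact by checking for the ostree marker.
    # The path is an assumption, not read from this log.
    import os

    def is_atomic_host() -> bool:
        return os.path.exists("/run/ostree-booted")

    print(is_atomic_host())
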
0:00:01.563 **** 2025-09-06 00:53:14.185947 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.185958 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.185967 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.185977 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.185986 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.185996 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.186005 | orchestrator | 2025-09-06 00:53:14.186014 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-06 00:53:14.186241 | orchestrator | Saturday 06 September 2025 00:42:13 +0000 (0:00:01.724) 0:00:03.287 **** 2025-09-06 00:53:14.186257 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.186272 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.186282 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.186291 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.186301 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.186310 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.186320 | orchestrator | 2025-09-06 00:53:14.186329 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-06 00:53:14.186353 | orchestrator | Saturday 06 September 2025 00:42:14 +0000 (0:00:00.772) 0:00:04.060 **** 2025-09-06 00:53:14.186363 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.186374 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.186385 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.186397 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.186408 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.186418 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.186429 | orchestrator | 2025-09-06 00:53:14.186439 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-06 00:53:14.186451 | orchestrator | Saturday 06 September 2025 00:42:15 +0000 (0:00:00.942) 0:00:05.003 **** 2025-09-06 00:53:14.186462 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.186474 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.186484 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.186508 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.186520 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.186530 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.186566 | orchestrator | 2025-09-06 00:53:14.186577 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-06 00:53:14.186623 | orchestrator | Saturday 06 September 2025 00:42:16 +0000 (0:00:00.750) 0:00:05.753 **** 2025-09-06 00:53:14.186637 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.186648 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.186689 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.186702 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.186715 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.186731 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.186887 | orchestrator | 2025-09-06 00:53:14.186907 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-06 00:53:14.186919 | orchestrator | Saturday 06 September 2025 00:42:16 +0000 (0:00:00.572) 0:00:06.326 **** 2025-09-06 00:53:14.186928 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.186938 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.186947 
| orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.186957 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.186977 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.186986 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.186996 | orchestrator | 2025-09-06 00:53:14.187005 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-06 00:53:14.187015 | orchestrator | Saturday 06 September 2025 00:42:18 +0000 (0:00:01.102) 0:00:07.428 **** 2025-09-06 00:53:14.187025 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.187035 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.187045 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.187054 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.187063 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.187073 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.187082 | orchestrator | 2025-09-06 00:53:14.187091 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-06 00:53:14.187101 | orchestrator | Saturday 06 September 2025 00:42:19 +0000 (0:00:01.138) 0:00:08.567 **** 2025-09-06 00:53:14.187111 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.187120 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.187130 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.187180 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.187198 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.187214 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.187225 | orchestrator | 2025-09-06 00:53:14.187235 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-06 00:53:14.187244 | orchestrator | Saturday 06 September 2025 00:42:20 +0000 (0:00:00.869) 0:00:09.436 **** 2025-09-06 00:53:14.187254 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-06 00:53:14.187263 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-06 00:53:14.187273 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-06 00:53:14.187282 | orchestrator | 2025-09-06 00:53:14.187292 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-06 00:53:14.187302 | orchestrator | Saturday 06 September 2025 00:42:20 +0000 (0:00:00.698) 0:00:10.135 **** 2025-09-06 00:53:14.187311 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.187321 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.187330 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.187340 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.187349 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.187359 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.187368 | orchestrator | 2025-09-06 00:53:14.187392 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-06 00:53:14.187402 | orchestrator | Saturday 06 September 2025 00:42:22 +0000 (0:00:01.389) 0:00:11.524 **** 2025-09-06 00:53:14.187412 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-06 00:53:14.187421 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-06 00:53:14.187431 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-06 00:53:14.187441 | orchestrator | 2025-09-06 00:53:14.187450 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-06 00:53:14.187460 | orchestrator | Saturday 06 September 2025 00:42:25 +0000 (0:00:03.175) 0:00:14.700 **** 2025-09-06 00:53:14.187470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-06 00:53:14.187480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-06 00:53:14.187489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-06 00:53:14.187498 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.187508 | orchestrator | 2025-09-06 00:53:14.187517 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-06 00:53:14.187527 | orchestrator | Saturday 06 September 2025 00:42:25 +0000 (0:00:00.460) 0:00:15.160 **** 2025-09-06 00:53:14.187539 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.187559 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.187607 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.187618 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.187628 | orchestrator | 2025-09-06 00:53:14.187638 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-06 00:53:14.187647 | orchestrator | Saturday 06 September 2025 00:42:26 +0000 (0:00:00.704) 0:00:15.865 **** 2025-09-06 00:53:14.187659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.187673 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.187683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-06 
00:53:14.187692 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.187702 | orchestrator | 2025-09-06 00:53:14.187712 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-06 00:53:14.187721 | orchestrator | Saturday 06 September 2025 00:42:26 +0000 (0:00:00.165) 0:00:16.030 **** 2025-09-06 00:53:14.187762 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-06 00:42:22.914494', 'end': '2025-09-06 00:42:23.195912', 'delta': '0:00:00.281418', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.187777 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-06 00:42:23.892979', 'end': '2025-09-06 00:42:24.141043', 'delta': '0:00:00.248064', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.187799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-06 00:42:24.629652', 'end': '2025-09-06 00:42:24.928202', 'delta': '0:00:00.298550', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.187957 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.187971 | orchestrator | 2025-09-06 00:53:14.187981 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-06 00:53:14.187990 | orchestrator | Saturday 06 September 2025 00:42:27 +0000 (0:00:00.391) 0:00:16.422 **** 2025-09-06 00:53:14.188000 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.188010 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.188019 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.188028 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.188038 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.188047 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.188057 | orchestrator | 2025-09-06 00:53:14.188066 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already 
running] ************* 2025-09-06 00:53:14.188076 | orchestrator | Saturday 06 September 2025 00:42:28 +0000 (0:00:01.731) 0:00:18.153 **** 2025-09-06 00:53:14.188085 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-06 00:53:14.188095 | orchestrator | 2025-09-06 00:53:14.188105 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-06 00:53:14.188114 | orchestrator | Saturday 06 September 2025 00:42:29 +0000 (0:00:00.730) 0:00:18.884 **** 2025-09-06 00:53:14.188124 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.188134 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.188143 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.188153 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.188162 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.188171 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.188181 | orchestrator | 2025-09-06 00:53:14.188190 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-06 00:53:14.188200 | orchestrator | Saturday 06 September 2025 00:42:31 +0000 (0:00:01.642) 0:00:20.526 **** 2025-09-06 00:53:14.188209 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.188218 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.188228 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.188237 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.188247 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.188267 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.188277 | orchestrator | 2025-09-06 00:53:14.188287 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-06 00:53:14.188296 | orchestrator | Saturday 06 September 2025 00:42:32 +0000 (0:00:01.432) 0:00:21.959 **** 2025-09-06 00:53:14.188306 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.188315 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.188325 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.188334 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.188343 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.188353 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.188362 | orchestrator | 2025-09-06 00:53:14.188371 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-06 00:53:14.188429 | orchestrator | Saturday 06 September 2025 00:42:33 +0000 (0:00:01.061) 0:00:23.020 **** 2025-09-06 00:53:14.188482 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.188500 | orchestrator | 2025-09-06 00:53:14.188550 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-06 00:53:14.188580 | orchestrator | Saturday 06 September 2025 00:42:33 +0000 (0:00:00.192) 0:00:23.212 **** 2025-09-06 00:53:14.188590 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.188600 | orchestrator | 2025-09-06 00:53:14.188609 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-06 00:53:14.188619 | orchestrator | Saturday 06 September 2025 00:42:34 +0000 (0:00:00.210) 0:00:23.423 **** 2025-09-06 00:53:14.188628 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.188637 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.188647 | 
orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.188656 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.188665 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.188675 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.188684 | orchestrator | 2025-09-06 00:53:14.188700 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-06 00:53:14.188710 | orchestrator | Saturday 06 September 2025 00:42:34 +0000 (0:00:00.543) 0:00:23.966 **** 2025-09-06 00:53:14.188720 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.188729 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.188806 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.188818 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.188828 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.188838 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.188847 | orchestrator | 2025-09-06 00:53:14.188857 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-06 00:53:14.188866 | orchestrator | Saturday 06 September 2025 00:42:35 +0000 (0:00:00.895) 0:00:24.862 **** 2025-09-06 00:53:14.188876 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.188886 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.188895 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.188904 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.188914 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.188923 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.188932 | orchestrator | 2025-09-06 00:53:14.188942 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-06 00:53:14.188952 | orchestrator | Saturday 06 September 2025 00:42:36 +0000 (0:00:00.876) 0:00:25.738 **** 2025-09-06 00:53:14.188961 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.188971 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.188980 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.188990 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.188999 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.189009 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.189018 | orchestrator | 2025-09-06 00:53:14.189034 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-06 00:53:14.189043 | orchestrator | Saturday 06 September 2025 00:42:36 +0000 (0:00:00.642) 0:00:26.381 **** 2025-09-06 00:53:14.189053 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.189062 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.189072 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.189081 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.189091 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.189100 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.189271 | orchestrator | 2025-09-06 00:53:14.189290 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-06 00:53:14.189305 | orchestrator | Saturday 06 September 2025 00:42:37 +0000 (0:00:00.729) 0:00:27.111 **** 2025-09-06 00:53:14.189315 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.189325 | orchestrator | skipping: [testbed-node-4] 2025-09-06 
00:53:14.189342 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.189352 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.189361 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.189370 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.189380 | orchestrator | 2025-09-06 00:53:14.189389 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-06 00:53:14.189399 | orchestrator | Saturday 06 September 2025 00:42:38 +0000 (0:00:00.880) 0:00:27.991 **** 2025-09-06 00:53:14.189409 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.189418 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.189427 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.189437 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.189446 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.189455 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.189465 | orchestrator | 2025-09-06 00:53:14.189474 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-06 00:53:14.189484 | orchestrator | Saturday 06 September 2025 00:42:39 +0000 (0:00:00.474) 0:00:28.465 **** 2025-09-06 00:53:14.189495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567-osd--block--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567', 'dm-uuid-LVM-r6f80mz9e22Vmz3H2GU0Ef84wrC6l1Ff93I1fJ96d512su5aeJbRbgGkDCiB9O2q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e6b4ea58--4fde--56e5--979f--346e927a82c3-osd--block--e6b4ea58--4fde--56e5--979f--346e927a82c3', 'dm-uuid-LVM-HjJ7GKB5yLddflqwhdAdEzWzRwWiY2ZFQVTzzIqNyOhGoOnap4BNgRvCK8MrZYxN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189547 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9969153--fa79--5368--8c16--a33775dfe5f6-osd--block--e9969153--fa79--5368--8c16--a33775dfe5f6', 'dm-uuid-LVM-cu4va0YeCfZXWXc5bD75hcGN10dQTwekTMF8ZROPB9Y9NfccWc0R6zJniHlIFj9E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part1', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part14', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part15', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part16', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.189668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--473d4611--c66c--5516--9b6d--fd0b18ba2fe0-osd--block--473d4611--c66c--5516--9b6d--fd0b18ba2fe0', 'dm-uuid-LVM-fycm5QQlGOho71zbS5RzdZZtfc1SZaX2Hr30eDfIJy9FbnEjzTcsZcaeVbsXeROx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567-osd--block--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tnpbur-SzDW-WQ8q-U5AF-PBL6-Su6o-vobvpB', 'scsi-0QEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8', 'scsi-SQEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.189692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e6b4ea58--4fde--56e5--979f--346e927a82c3-osd--block--e6b4ea58--4fde--56e5--979f--346e927a82c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iCop39-Riup-vFef-zeMs-SIWe-bIEY-BLh0Jz', 'scsi-0QEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff', 'scsi-SQEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.189830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5', 'scsi-SQEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.189871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.189881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189960 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.189969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.189998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f-osd--block--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f', 'dm-uuid-LVM-S1sgywPEkjpv9d0wsFPQU3cEbxfDfA6xyq1Srsvdb8p4ZPCF91EIdWz8Ul8NLFKG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 
'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.190008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d801673f--a74f--56ad--ad0d--e97588ff4709-osd--block--d801673f--a74f--56ad--ad0d--e97588ff4709', 'dm-uuid-LVM-gsAMb0k6MRCpv6Q1MlP1kUCTMe8oIPXrC4bOdSsf658daatcLZ99by6ZGXXFUiqT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.190075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.190087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.190095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.190103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e9969153--fa79--5368--8c16--a33775dfe5f6-osd--block--e9969153--fa79--5368--8c16--a33775dfe5f6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z7DZGc-sTzo-CcSh-NBkn-EaEg-3VEw-R0yWte', 'scsi-0QEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74', 'scsi-SQEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--473d4611--c66c--5516--9b6d--fd0b18ba2fe0-osd--block--473d4611--c66c--5516--9b6d--fd0b18ba2fe0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qg95jI-e6Nq-hStd-2tYS-uIOn-9vf1-GjhhtD', 'scsi-0QEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba', 'scsi-SQEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9', 'scsi-SQEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part1', 'scsi-SQEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part14', 'scsi-SQEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part15', 'scsi-SQEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part16', 'scsi-SQEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7', 'scsi-SQEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part1', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part14', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part15', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part16', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f-osd--block--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MmVSvD-xcAG-cedU-B8my-xGk5-nlg6-2Khtre', 'scsi-0QEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b', 'scsi-SQEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d801673f--a74f--56ad--ad0d--e97588ff4709-osd--block--d801673f--a74f--56ad--ad0d--e97588ff4709'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AQIorT-mZph-jOfK-swZz-e1si-sNhx-b5mwDO', 'scsi-0QEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4', 'scsi-SQEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634', 'scsi-SQEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192706 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.192721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425', 'scsi-SQEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part1', 'scsi-SQEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part14', 'scsi-SQEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part15', 'scsi-SQEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part16', 'scsi-SQEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192805 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.192819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:53:14.192832 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.192849 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.192868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:53:14.192987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf', 'scsi-SQEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-06 00:53:14.193020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-06 00:53:14.193032 | orchestrator | skipping: [testbed-node-2]
2025-09-06 00:53:14.193043 | orchestrator |
2025-09-06 00:53:14.193055 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-09-06 00:53:14.193067 | orchestrator | Saturday 06 September 2025 00:42:40 +0000 (0:00:01.174) 0:00:29.640 ****
2025-09-06 00:53:14.193079 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567-osd--block--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567', 'dm-uuid-LVM-r6f80mz9e22Vmz3H2GU0Ef84wrC6l1Ff93I1fJ96d512su5aeJbRbgGkDCiB9O2q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1',
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193097 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e6b4ea58--4fde--56e5--979f--346e927a82c3-osd--block--e6b4ea58--4fde--56e5--979f--346e927a82c3', 'dm-uuid-LVM-HjJ7GKB5yLddflqwhdAdEzWzRwWiY2ZFQVTzzIqNyOhGoOnap4BNgRvCK8MrZYxN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193109 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193141 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193158 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193170 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193186 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193198 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193229 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9969153--fa79--5368--8c16--a33775dfe5f6-osd--block--e9969153--fa79--5368--8c16--a33775dfe5f6', 'dm-uuid-LVM-cu4va0YeCfZXWXc5bD75hcGN10dQTwekTMF8ZROPB9Y9NfccWc0R6zJniHlIFj9E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193254 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part1', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part14', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part15', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part16', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567-osd--block--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tnpbur-SzDW-WQ8q-U5AF-PBL6-Su6o-vobvpB', 'scsi-0QEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8', 'scsi-SQEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193291 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--473d4611--c66c--5516--9b6d--fd0b18ba2fe0-osd--block--473d4611--c66c--5516--9b6d--fd0b18ba2fe0', 'dm-uuid-LVM-fycm5QQlGOho71zbS5RzdZZtfc1SZaX2Hr30eDfIJy9FbnEjzTcsZcaeVbsXeROx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193310 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e6b4ea58--4fde--56e5--979f--346e927a82c3-osd--block--e6b4ea58--4fde--56e5--979f--346e927a82c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iCop39-Riup-vFef-zeMs-SIWe-bIEY-BLh0Jz', 'scsi-0QEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff', 'scsi-SQEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193323 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193339 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5', 'scsi-SQEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193369 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193380 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193391 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.193408 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193436 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193458 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193484 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193501 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e9969153--fa79--5368--8c16--a33775dfe5f6-osd--block--e9969153--fa79--5368--8c16--a33775dfe5f6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z7DZGc-sTzo-CcSh-NBkn-EaEg-3VEw-R0yWte', 'scsi-0QEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74', 'scsi-SQEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193514 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--473d4611--c66c--5516--9b6d--fd0b18ba2fe0-osd--block--473d4611--c66c--5516--9b6d--fd0b18ba2fe0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qg95jI-e6Nq-hStd-2tYS-uIOn-9vf1-GjhhtD', 'scsi-0QEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba', 'scsi-SQEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193532 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7', 'scsi-SQEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193549 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193562 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f-osd--block--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f', 'dm-uuid-LVM-S1sgywPEkjpv9d0wsFPQU3cEbxfDfA6xyq1Srsvdb8p4ZPCF91EIdWz8Ul8NLFKG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193578 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d801673f--a74f--56ad--ad0d--e97588ff4709-osd--block--d801673f--a74f--56ad--ad0d--e97588ff4709', 'dm-uuid-LVM-gsAMb0k6MRCpv6Q1MlP1kUCTMe8oIPXrC4bOdSsf658daatcLZ99by6ZGXXFUiqT'], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193589 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.193600 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193618 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193630 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193647 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193659 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193678 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193690 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193708 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193719 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193730 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193773 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193787 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193804 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193822 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.193989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part1', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part14', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part15', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part16', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194009 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194067 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194089 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 
'value': {'holders': ['ceph--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f-osd--block--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MmVSvD-xcAG-cedU-B8my-xGk5-nlg6-2Khtre', 'scsi-0QEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b', 'scsi-SQEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194112 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9', 'scsi-SQEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part1', 'scsi-SQEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part14', 'scsi-SQEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part15', 'scsi-SQEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part16', 'scsi-SQEMU_QEMU_HARDDISK_ddf87fb6-4780-4596-86d4-c5a6d6af40b9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194129 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d801673f--a74f--56ad--ad0d--e97588ff4709-osd--block--d801673f--a74f--56ad--ad0d--e97588ff4709'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AQIorT-mZph-jOfK-swZz-e1si-sNhx-b5mwDO', 'scsi-0QEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4', 'scsi-SQEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194152 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194172 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634', 'scsi-SQEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194199 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194219 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194236 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194256 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194267 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194278 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194289 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194308 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194321 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194332 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.194343 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.194369 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425', 'scsi-SQEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part1', 'scsi-SQEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part14', 'scsi-SQEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part15', 'scsi-SQEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part16', 'scsi-SQEMU_QEMU_HARDDISK_40122d3f-0139-48c0-a1ea-e85093653425-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 
1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194382 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194394 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.194411 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194423 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194445 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194457 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194468 | orchestrator | 
skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194480 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194497 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194509 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194537 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf', 'scsi-SQEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c3f1420-3954-4bd3-a390-a5ebcf190ecf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194550 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:53:14.194561 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.194572 | orchestrator | 2025-09-06 00:53:14.194587 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-06 00:53:14.194600 | orchestrator | Saturday 06 September 2025 00:42:41 +0000 (0:00:00.868) 0:00:30.508 **** 2025-09-06 00:53:14.194618 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.194632 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.194645 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.194658 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.194671 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.194683 | orchestrator | ok: [testbed-node-2] 2025-09-06 
00:53:14.194695 | orchestrator | 2025-09-06 00:53:14.194708 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-06 00:53:14.194728 | orchestrator | Saturday 06 September 2025 00:42:42 +0000 (0:00:01.226) 0:00:31.734 **** 2025-09-06 00:53:14.194785 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.194800 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.194812 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.194826 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.194839 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.194850 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.194862 | orchestrator | 2025-09-06 00:53:14.194876 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-06 00:53:14.194889 | orchestrator | Saturday 06 September 2025 00:42:42 +0000 (0:00:00.609) 0:00:32.344 **** 2025-09-06 00:53:14.194901 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.194913 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.194926 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.194939 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.194952 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.194964 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.194975 | orchestrator | 2025-09-06 00:53:14.194985 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-06 00:53:14.194996 | orchestrator | Saturday 06 September 2025 00:42:43 +0000 (0:00:00.586) 0:00:32.931 **** 2025-09-06 00:53:14.195007 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.195018 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.195029 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.195044 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.195055 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.195066 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.195076 | orchestrator | 2025-09-06 00:53:14.195087 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-06 00:53:14.195098 | orchestrator | Saturday 06 September 2025 00:42:43 +0000 (0:00:00.459) 0:00:33.390 **** 2025-09-06 00:53:14.195108 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.195119 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.195129 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.195140 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.195151 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.195161 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.195171 | orchestrator | 2025-09-06 00:53:14.195182 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-06 00:53:14.195193 | orchestrator | Saturday 06 September 2025 00:42:45 +0000 (0:00:01.264) 0:00:34.655 **** 2025-09-06 00:53:14.195204 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.195214 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.195225 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.195236 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.195246 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.195257 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.195267 | orchestrator 
| 2025-09-06 00:53:14.195278 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-06 00:53:14.195289 | orchestrator | Saturday 06 September 2025 00:42:45 +0000 (0:00:00.594) 0:00:35.249 **** 2025-09-06 00:53:14.195300 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-06 00:53:14.195310 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-06 00:53:14.195321 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-06 00:53:14.195332 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-06 00:53:14.195342 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-06 00:53:14.195353 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-06 00:53:14.195363 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-06 00:53:14.195374 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-06 00:53:14.195384 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-06 00:53:14.195402 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-06 00:53:14.195413 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-06 00:53:14.195423 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-06 00:53:14.195434 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-06 00:53:14.195445 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-06 00:53:14.195455 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-06 00:53:14.195466 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-06 00:53:14.195476 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-06 00:53:14.195487 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-06 00:53:14.195497 | orchestrator | 2025-09-06 00:53:14.195508 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-06 00:53:14.195519 | orchestrator | Saturday 06 September 2025 00:42:49 +0000 (0:00:03.377) 0:00:38.626 **** 2025-09-06 00:53:14.195529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-06 00:53:14.195540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-06 00:53:14.195551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-06 00:53:14.195562 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.195572 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-06 00:53:14.195583 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-06 00:53:14.195593 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-06 00:53:14.195604 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.195615 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-06 00:53:14.195625 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-06 00:53:14.195642 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-06 00:53:14.195654 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.195664 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-06 00:53:14.195675 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-06 00:53:14.195685 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-06 00:53:14.195696 
| orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.195707 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-06 00:53:14.195717 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-06 00:53:14.195728 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-06 00:53:14.195761 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.195773 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-06 00:53:14.195784 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-06 00:53:14.195795 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-06 00:53:14.195806 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.195816 | orchestrator | 2025-09-06 00:53:14.195827 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-06 00:53:14.195838 | orchestrator | Saturday 06 September 2025 00:42:50 +0000 (0:00:00.949) 0:00:39.576 **** 2025-09-06 00:53:14.195849 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.195859 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.195870 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.195882 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-3, testbed-node-5 2025-09-06 00:53:14.195893 | orchestrator | 2025-09-06 00:53:14.195909 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-06 00:53:14.195920 | orchestrator | Saturday 06 September 2025 00:42:51 +0000 (0:00:00.835) 0:00:40.411 **** 2025-09-06 00:53:14.195931 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.195948 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.195959 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.195970 | orchestrator | 2025-09-06 00:53:14.195980 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-06 00:53:14.195991 | orchestrator | Saturday 06 September 2025 00:42:51 +0000 (0:00:00.412) 0:00:40.823 **** 2025-09-06 00:53:14.196002 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.196013 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.196024 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.196034 | orchestrator | 2025-09-06 00:53:14.196045 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-06 00:53:14.196055 | orchestrator | Saturday 06 September 2025 00:42:51 +0000 (0:00:00.415) 0:00:41.239 **** 2025-09-06 00:53:14.196066 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.196077 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.196087 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.196098 | orchestrator | 2025-09-06 00:53:14.196109 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-06 00:53:14.196119 | orchestrator | Saturday 06 September 2025 00:42:52 +0000 (0:00:00.567) 0:00:41.806 **** 2025-09-06 00:53:14.196130 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.196141 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.196152 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.196162 | orchestrator | 2025-09-06 00:53:14.196173 | orchestrator | TASK 
[ceph-facts : Set_fact _interface] **************************************** 2025-09-06 00:53:14.196183 | orchestrator | Saturday 06 September 2025 00:42:52 +0000 (0:00:00.585) 0:00:42.392 **** 2025-09-06 00:53:14.196194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.196205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.196215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.196226 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.196236 | orchestrator | 2025-09-06 00:53:14.196247 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-06 00:53:14.196258 | orchestrator | Saturday 06 September 2025 00:42:53 +0000 (0:00:00.466) 0:00:42.859 **** 2025-09-06 00:53:14.196268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.196279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.196290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.196300 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.196311 | orchestrator | 2025-09-06 00:53:14.196321 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-06 00:53:14.196332 | orchestrator | Saturday 06 September 2025 00:42:53 +0000 (0:00:00.332) 0:00:43.191 **** 2025-09-06 00:53:14.196343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.196354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.196364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.196375 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.196385 | orchestrator | 2025-09-06 00:53:14.196396 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-06 00:53:14.196407 | orchestrator | Saturday 06 September 2025 00:42:54 +0000 (0:00:00.665) 0:00:43.857 **** 2025-09-06 00:53:14.196418 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.196428 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.196439 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.196449 | orchestrator | 2025-09-06 00:53:14.196460 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-06 00:53:14.196471 | orchestrator | Saturday 06 September 2025 00:42:55 +0000 (0:00:00.558) 0:00:44.415 **** 2025-09-06 00:53:14.196481 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-06 00:53:14.196492 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-06 00:53:14.196509 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-06 00:53:14.196519 | orchestrator | 2025-09-06 00:53:14.196536 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-06 00:53:14.196547 | orchestrator | Saturday 06 September 2025 00:42:56 +0000 (0:00:01.144) 0:00:45.560 **** 2025-09-06 00:53:14.196558 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-06 00:53:14.196569 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-06 00:53:14.196579 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-06 00:53:14.196590 | orchestrator | ok: 
[testbed-node-3] => (item=testbed-node-3) 2025-09-06 00:53:14.196601 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-06 00:53:14.196612 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-06 00:53:14.196622 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-06 00:53:14.196633 | orchestrator | 2025-09-06 00:53:14.196643 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-06 00:53:14.196654 | orchestrator | Saturday 06 September 2025 00:42:57 +0000 (0:00:00.885) 0:00:46.445 **** 2025-09-06 00:53:14.196665 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-06 00:53:14.196675 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-06 00:53:14.196686 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-06 00:53:14.196697 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-06 00:53:14.196712 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-06 00:53:14.196723 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-06 00:53:14.196734 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-06 00:53:14.196773 | orchestrator | 2025-09-06 00:53:14.196792 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-06 00:53:14.196811 | orchestrator | Saturday 06 September 2025 00:42:59 +0000 (0:00:02.198) 0:00:48.643 **** 2025-09-06 00:53:14.196830 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.196844 | orchestrator | 2025-09-06 00:53:14.196855 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-06 00:53:14.196866 | orchestrator | Saturday 06 September 2025 00:43:00 +0000 (0:00:01.613) 0:00:50.257 **** 2025-09-06 00:53:14.196877 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.196887 | orchestrator | 2025-09-06 00:53:14.196898 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-06 00:53:14.196909 | orchestrator | Saturday 06 September 2025 00:43:01 +0000 (0:00:01.082) 0:00:51.340 **** 2025-09-06 00:53:14.196919 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.196930 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.196940 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.196951 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.196961 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.196972 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.196982 | orchestrator | 2025-09-06 00:53:14.196993 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-06 00:53:14.197004 | orchestrator | Saturday 06 September 2025 00:43:04 +0000 (0:00:02.102) 0:00:53.443 **** 2025-09-06 00:53:14.197015 | orchestrator | 
skipping: [testbed-node-0] 2025-09-06 00:53:14.197025 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.197043 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.197054 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.197064 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.197075 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.197085 | orchestrator | 2025-09-06 00:53:14.197096 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-06 00:53:14.197107 | orchestrator | Saturday 06 September 2025 00:43:05 +0000 (0:00:01.300) 0:00:54.743 **** 2025-09-06 00:53:14.197118 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.197129 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.197139 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.197150 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.197160 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.197171 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.197182 | orchestrator | 2025-09-06 00:53:14.197192 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-06 00:53:14.197203 | orchestrator | Saturday 06 September 2025 00:43:06 +0000 (0:00:00.803) 0:00:55.547 **** 2025-09-06 00:53:14.197214 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.197224 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.197235 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.197245 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.197256 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.197267 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.197277 | orchestrator | 2025-09-06 00:53:14.197288 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-06 00:53:14.197299 | orchestrator | Saturday 06 September 2025 00:43:06 +0000 (0:00:00.792) 0:00:56.339 **** 2025-09-06 00:53:14.197310 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.197320 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.197331 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.197342 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.197352 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.197363 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.197374 | orchestrator | 2025-09-06 00:53:14.197385 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-06 00:53:14.197401 | orchestrator | Saturday 06 September 2025 00:43:08 +0000 (0:00:01.187) 0:00:57.526 **** 2025-09-06 00:53:14.197413 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.197424 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.197434 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.197445 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.197455 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.197466 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.197476 | orchestrator | 2025-09-06 00:53:14.197487 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-06 00:53:14.197498 | orchestrator | Saturday 06 September 2025 00:43:08 +0000 (0:00:00.510) 0:00:58.037 **** 2025-09-06 00:53:14.197508 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.197519 | 
orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.197529 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.197540 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.197550 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.197561 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.197571 | orchestrator | 2025-09-06 00:53:14.197582 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-06 00:53:14.197593 | orchestrator | Saturday 06 September 2025 00:43:09 +0000 (0:00:00.606) 0:00:58.643 **** 2025-09-06 00:53:14.197603 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.197614 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.197625 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.197635 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.197646 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.197656 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.197673 | orchestrator | 2025-09-06 00:53:14.197684 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-06 00:53:14.197695 | orchestrator | Saturday 06 September 2025 00:43:10 +0000 (0:00:01.152) 0:00:59.795 **** 2025-09-06 00:53:14.197711 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.197722 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.197732 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.197803 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.197817 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.197827 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.197838 | orchestrator | 2025-09-06 00:53:14.197849 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-06 00:53:14.197859 | orchestrator | Saturday 06 September 2025 00:43:11 +0000 (0:00:01.421) 0:01:01.216 **** 2025-09-06 00:53:14.197870 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.197881 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.197892 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.197902 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.197913 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.197923 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.197934 | orchestrator | 2025-09-06 00:53:14.197945 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-06 00:53:14.197956 | orchestrator | Saturday 06 September 2025 00:43:12 +0000 (0:00:00.863) 0:01:02.080 **** 2025-09-06 00:53:14.197966 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.197977 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.197987 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.197998 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.198009 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.198051 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.198065 | orchestrator | 2025-09-06 00:53:14.198076 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-06 00:53:14.198087 | orchestrator | Saturday 06 September 2025 00:43:13 +0000 (0:00:00.713) 0:01:02.794 **** 2025-09-06 00:53:14.198098 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.198108 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.198119 | orchestrator | ok: 
[testbed-node-5] 2025-09-06 00:53:14.198130 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.198140 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.198151 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.198162 | orchestrator | 2025-09-06 00:53:14.198173 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-06 00:53:14.198184 | orchestrator | Saturday 06 September 2025 00:43:14 +0000 (0:00:01.062) 0:01:03.857 **** 2025-09-06 00:53:14.198194 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.198205 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.198216 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.198227 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.198238 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.198248 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.198259 | orchestrator | 2025-09-06 00:53:14.198270 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-06 00:53:14.198280 | orchestrator | Saturday 06 September 2025 00:43:15 +0000 (0:00:00.994) 0:01:04.851 **** 2025-09-06 00:53:14.198291 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.198302 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.198315 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.198332 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.198348 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.198365 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.198375 | orchestrator | 2025-09-06 00:53:14.198385 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-06 00:53:14.198395 | orchestrator | Saturday 06 September 2025 00:43:16 +0000 (0:00:00.882) 0:01:05.734 **** 2025-09-06 00:53:14.198404 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.198421 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.198431 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.198441 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.198450 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.198459 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.198469 | orchestrator | 2025-09-06 00:53:14.198478 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-06 00:53:14.198488 | orchestrator | Saturday 06 September 2025 00:43:16 +0000 (0:00:00.525) 0:01:06.259 **** 2025-09-06 00:53:14.198497 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.198506 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.198516 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.198525 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.198534 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.198544 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.198553 | orchestrator | 2025-09-06 00:53:14.198577 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-06 00:53:14.198587 | orchestrator | Saturday 06 September 2025 00:43:17 +0000 (0:00:00.643) 0:01:06.903 **** 2025-09-06 00:53:14.198597 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.198606 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.198616 | orchestrator | skipping: [testbed-node-5] 2025-09-06 
00:53:14.198625 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.198635 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.198644 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.198654 | orchestrator | 2025-09-06 00:53:14.198663 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-06 00:53:14.198673 | orchestrator | Saturday 06 September 2025 00:43:18 +0000 (0:00:00.501) 0:01:07.405 **** 2025-09-06 00:53:14.198682 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.198692 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.198702 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.198711 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.198720 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.198729 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.198791 | orchestrator | 2025-09-06 00:53:14.198803 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-06 00:53:14.198813 | orchestrator | Saturday 06 September 2025 00:43:18 +0000 (0:00:00.639) 0:01:08.044 **** 2025-09-06 00:53:14.198823 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.198832 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.198842 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.198851 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.198861 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.198870 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.198880 | orchestrator | 2025-09-06 00:53:14.198889 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-06 00:53:14.198899 | orchestrator | Saturday 06 September 2025 00:43:19 +0000 (0:00:01.166) 0:01:09.210 **** 2025-09-06 00:53:14.198915 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.198924 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.198934 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.198944 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.198953 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.198963 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.198972 | orchestrator | 2025-09-06 00:53:14.198982 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-06 00:53:14.198992 | orchestrator | Saturday 06 September 2025 00:43:21 +0000 (0:00:01.458) 0:01:10.669 **** 2025-09-06 00:53:14.199001 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.199011 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.199020 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.199029 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.199039 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.199055 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.199065 | orchestrator | 2025-09-06 00:53:14.199075 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-06 00:53:14.199084 | orchestrator | Saturday 06 September 2025 00:43:23 +0000 (0:00:02.362) 0:01:13.031 **** 2025-09-06 00:53:14.199094 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.199104 | orchestrator | 2025-09-06 00:53:14.199114 | orchestrator | TASK 
[ceph-container-common : Stop lvmetad] ************************************ 2025-09-06 00:53:14.199124 | orchestrator | Saturday 06 September 2025 00:43:24 +0000 (0:00:01.133) 0:01:14.164 **** 2025-09-06 00:53:14.199133 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.199143 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.199152 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.199162 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.199172 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.199181 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.199190 | orchestrator | 2025-09-06 00:53:14.199198 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-06 00:53:14.199206 | orchestrator | Saturday 06 September 2025 00:43:25 +0000 (0:00:00.615) 0:01:14.779 **** 2025-09-06 00:53:14.199214 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.199221 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.199229 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.199237 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.199245 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.199253 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.199261 | orchestrator | 2025-09-06 00:53:14.199269 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-06 00:53:14.199277 | orchestrator | Saturday 06 September 2025 00:43:26 +0000 (0:00:00.829) 0:01:15.609 **** 2025-09-06 00:53:14.199285 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-06 00:53:14.199293 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-06 00:53:14.199301 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-06 00:53:14.199308 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-06 00:53:14.199316 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-06 00:53:14.199324 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-06 00:53:14.199332 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-06 00:53:14.199340 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-06 00:53:14.199348 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-06 00:53:14.199356 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-06 00:53:14.199364 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-06 00:53:14.199377 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-06 00:53:14.199385 | orchestrator | 2025-09-06 00:53:14.199393 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-06 00:53:14.199401 | orchestrator | Saturday 06 September 2025 00:43:27 +0000 (0:00:01.236) 0:01:16.845 **** 2025-09-06 00:53:14.199409 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.199417 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.199425 | orchestrator | changed: 
[testbed-node-4] 2025-09-06 00:53:14.199433 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.199441 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.199448 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.199461 | orchestrator | 2025-09-06 00:53:14.199469 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-06 00:53:14.199477 | orchestrator | Saturday 06 September 2025 00:43:28 +0000 (0:00:01.361) 0:01:18.206 **** 2025-09-06 00:53:14.199485 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.199493 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.199501 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.199508 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.199516 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.199524 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.199532 | orchestrator | 2025-09-06 00:53:14.199539 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-06 00:53:14.199547 | orchestrator | Saturday 06 September 2025 00:43:29 +0000 (0:00:00.567) 0:01:18.774 **** 2025-09-06 00:53:14.199555 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.199564 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.199571 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.199579 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.199587 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.199595 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.199603 | orchestrator | 2025-09-06 00:53:14.199611 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-06 00:53:14.199619 | orchestrator | Saturday 06 September 2025 00:43:30 +0000 (0:00:00.920) 0:01:19.694 **** 2025-09-06 00:53:14.199627 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.199635 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.199642 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.199650 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.199658 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.199666 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.199674 | orchestrator | 2025-09-06 00:53:14.199681 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-06 00:53:14.199690 | orchestrator | Saturday 06 September 2025 00:43:30 +0000 (0:00:00.566) 0:01:20.260 **** 2025-09-06 00:53:14.199698 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.199706 | orchestrator | 2025-09-06 00:53:14.199714 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-06 00:53:14.199722 | orchestrator | Saturday 06 September 2025 00:43:32 +0000 (0:00:01.152) 0:01:21.413 **** 2025-09-06 00:53:14.199729 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.199753 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.199764 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.199772 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.199779 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.199787 | orchestrator | ok: [testbed-node-0] 2025-09-06 
00:53:14.199795 | orchestrator | 2025-09-06 00:53:14.199803 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-06 00:53:14.199811 | orchestrator | Saturday 06 September 2025 00:44:58 +0000 (0:01:26.717) 0:02:48.131 **** 2025-09-06 00:53:14.199819 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-06 00:53:14.199826 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-06 00:53:14.199834 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-06 00:53:14.199842 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.199850 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-06 00:53:14.199858 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-06 00:53:14.199866 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-06 00:53:14.199873 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-06 00:53:14.199890 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-06 00:53:14.199897 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-06 00:53:14.199905 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.199913 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.199921 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-06 00:53:14.199929 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-06 00:53:14.199936 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-06 00:53:14.199944 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-06 00:53:14.199952 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-06 00:53:14.199960 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-06 00:53:14.199968 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.199975 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.199983 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-06 00:53:14.199995 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-06 00:53:14.200004 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-06 00:53:14.200012 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.200019 | orchestrator | 2025-09-06 00:53:14.200027 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-06 00:53:14.200062 | orchestrator | Saturday 06 September 2025 00:44:59 +0000 (0:00:01.175) 0:02:49.306 **** 2025-09-06 00:53:14.200071 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.200079 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.200086 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.200094 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.200102 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.200110 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.200118 | orchestrator | 
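2025-09-06 00:53:14 | note | The Ceph image pull above accounts for most of this part of the play (about 1 minute 27 seconds of task time in this run), while the follow-up alertmanager/prometheus/grafana and node-exporter pulls are skipped, presumably because the monitoring stack is disabled in this configuration. As a rough illustration of what the fetch_image.yml step boils down to on each node, the sketch below pulls the configured image with the container runtime; the registry, image name, tag, runtime binary and retry values are assumptions for illustration only and are not taken from this log.

- name: Pull the Ceph container image (illustrative sketch)
  hosts: all
  become: true
  vars:
    # Assumptions for illustration only; the actual registry/image/tag used by
    # this deployment are not visible in the log excerpt above.
    ceph_docker_registry: quay.io
    ceph_docker_image: ceph/ceph
    ceph_docker_image_tag: v18.2.1
    container_binary: docker
  tasks:
    - name: Pull image with the container runtime
      ansible.builtin.command: >
        {{ container_binary }} pull
        {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
      register: pull_result
      retries: 3
      delay: 10
      until: pull_result.rc == 0
      changed_when: false

2025-09-06 00:53:14 | note | In this run the task reports ok on every node, consistent with the pull completing without errors.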
2025-09-06 00:53:14.200125 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-06 00:53:14.200134 | orchestrator | Saturday 06 September 2025 00:45:00 +0000 (0:00:00.765) 0:02:50.071 **** 2025-09-06 00:53:14.200141 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.200149 | orchestrator | 2025-09-06 00:53:14.200157 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-06 00:53:14.200165 | orchestrator | Saturday 06 September 2025 00:45:00 +0000 (0:00:00.162) 0:02:50.233 **** 2025-09-06 00:53:14.200172 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.200180 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.200188 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.200196 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.200203 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.200211 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.200219 | orchestrator | 2025-09-06 00:53:14.200227 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-06 00:53:14.200239 | orchestrator | Saturday 06 September 2025 00:45:01 +0000 (0:00:00.502) 0:02:50.736 **** 2025-09-06 00:53:14.200247 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.200255 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.200263 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.200270 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.200278 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.200286 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.200293 | orchestrator | 2025-09-06 00:53:14.200301 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-06 00:53:14.200309 | orchestrator | Saturday 06 September 2025 00:45:02 +0000 (0:00:00.699) 0:02:51.436 **** 2025-09-06 00:53:14.200322 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.200330 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.200337 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.200345 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.200353 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.200360 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.200368 | orchestrator | 2025-09-06 00:53:14.200376 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-06 00:53:14.200384 | orchestrator | Saturday 06 September 2025 00:45:02 +0000 (0:00:00.560) 0:02:51.996 **** 2025-09-06 00:53:14.200392 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.200400 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.200407 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.200415 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.200423 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.200431 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.200439 | orchestrator | 2025-09-06 00:53:14.200447 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-06 00:53:14.200455 | orchestrator | Saturday 06 September 2025 00:45:05 +0000 (0:00:03.095) 0:02:55.091 **** 2025-09-06 00:53:14.200462 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.200470 | orchestrator | ok: [testbed-node-4] 2025-09-06 
00:53:14.200478 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.200486 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.200493 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.200501 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.200509 | orchestrator | 2025-09-06 00:53:14.200517 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-06 00:53:14.200525 | orchestrator | Saturday 06 September 2025 00:45:06 +0000 (0:00:00.762) 0:02:55.853 **** 2025-09-06 00:53:14.200533 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.200541 | orchestrator | 2025-09-06 00:53:14.200550 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-06 00:53:14.200558 | orchestrator | Saturday 06 September 2025 00:45:07 +0000 (0:00:01.217) 0:02:57.070 **** 2025-09-06 00:53:14.200565 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.200573 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.200581 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.200589 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.200597 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.200604 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.200612 | orchestrator | 2025-09-06 00:53:14.200620 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-06 00:53:14.200628 | orchestrator | Saturday 06 September 2025 00:45:08 +0000 (0:00:00.674) 0:02:57.745 **** 2025-09-06 00:53:14.200636 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.200643 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.200651 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.200659 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.200667 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.200675 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.200682 | orchestrator | 2025-09-06 00:53:14.200690 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-06 00:53:14.200698 | orchestrator | Saturday 06 September 2025 00:45:08 +0000 (0:00:00.593) 0:02:58.338 **** 2025-09-06 00:53:14.200706 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.200714 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.200721 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.200729 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.200753 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.200782 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.200803 | orchestrator | 2025-09-06 00:53:14.200812 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-06 00:53:14.200820 | orchestrator | Saturday 06 September 2025 00:45:09 +0000 (0:00:00.778) 0:02:59.116 **** 2025-09-06 00:53:14.200827 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.200835 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.200843 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.200851 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.200859 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.200867 | 
orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.200875 | orchestrator | 2025-09-06 00:53:14.200883 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-06 00:53:14.200891 | orchestrator | Saturday 06 September 2025 00:45:10 +0000 (0:00:00.844) 0:02:59.960 **** 2025-09-06 00:53:14.200898 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.200906 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.200914 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.200922 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.200929 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.200937 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.200945 | orchestrator | 2025-09-06 00:53:14.200953 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-06 00:53:14.200960 | orchestrator | Saturday 06 September 2025 00:45:11 +0000 (0:00:00.846) 0:03:00.806 **** 2025-09-06 00:53:14.200969 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.200976 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.200984 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.200992 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.200999 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.201007 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.201015 | orchestrator | 2025-09-06 00:53:14.201027 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-06 00:53:14.201035 | orchestrator | Saturday 06 September 2025 00:45:11 +0000 (0:00:00.553) 0:03:01.360 **** 2025-09-06 00:53:14.201043 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.201051 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.201059 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.201067 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.201074 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.201082 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.201090 | orchestrator | 2025-09-06 00:53:14.201097 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-06 00:53:14.201105 | orchestrator | Saturday 06 September 2025 00:45:12 +0000 (0:00:00.679) 0:03:02.040 **** 2025-09-06 00:53:14.201113 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.201121 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.201128 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.201136 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.201144 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.201152 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.201160 | orchestrator | 2025-09-06 00:53:14.201168 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-06 00:53:14.201176 | orchestrator | Saturday 06 September 2025 00:45:13 +0000 (0:00:00.669) 0:03:02.710 **** 2025-09-06 00:53:14.201184 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.201192 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.201200 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.201207 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.201215 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.201223 | orchestrator | ok: 
[testbed-node-2] 2025-09-06 00:53:14.201230 | orchestrator | 2025-09-06 00:53:14.201238 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-06 00:53:14.201246 | orchestrator | Saturday 06 September 2025 00:45:14 +0000 (0:00:00.985) 0:03:03.695 **** 2025-09-06 00:53:14.201259 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.201268 | orchestrator | 2025-09-06 00:53:14.201276 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-06 00:53:14.201284 | orchestrator | Saturday 06 September 2025 00:45:15 +0000 (0:00:01.030) 0:03:04.725 **** 2025-09-06 00:53:14.201292 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-06 00:53:14.201300 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-06 00:53:14.201307 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-06 00:53:14.201315 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-06 00:53:14.201323 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-06 00:53:14.201331 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-06 00:53:14.201339 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-06 00:53:14.201346 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-06 00:53:14.201354 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-06 00:53:14.201361 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-06 00:53:14.201369 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-06 00:53:14.201377 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-06 00:53:14.201385 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-06 00:53:14.201393 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-06 00:53:14.201400 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-06 00:53:14.201408 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-06 00:53:14.201416 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-06 00:53:14.201424 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-06 00:53:14.201432 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-06 00:53:14.201440 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-06 00:53:14.201452 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-06 00:53:14.201460 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-06 00:53:14.201468 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-06 00:53:14.201476 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-06 00:53:14.201483 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-06 00:53:14.201491 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-06 00:53:14.201499 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-06 00:53:14.201507 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-06 00:53:14.201514 | orchestrator | changed: [testbed-node-3] => 
(item=/var/lib/ceph/tmp) 2025-09-06 00:53:14.201522 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-06 00:53:14.201530 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-06 00:53:14.201537 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-06 00:53:14.201545 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-06 00:53:14.201553 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-06 00:53:14.201561 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-06 00:53:14.201568 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-06 00:53:14.201576 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-06 00:53:14.201584 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-06 00:53:14.201592 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-06 00:53:14.201612 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-06 00:53:14.201621 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-06 00:53:14.201629 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-06 00:53:14.201636 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-06 00:53:14.201644 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-06 00:53:14.201652 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-06 00:53:14.201660 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-06 00:53:14.201667 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-06 00:53:14.201675 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-06 00:53:14.201683 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-06 00:53:14.201691 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-06 00:53:14.201698 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-06 00:53:14.201706 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-06 00:53:14.201714 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-06 00:53:14.201721 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-06 00:53:14.201729 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-06 00:53:14.201777 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-06 00:53:14.201790 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-06 00:53:14.201798 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-06 00:53:14.201806 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-06 00:53:14.201814 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-06 00:53:14.201822 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-06 00:53:14.201830 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-06 00:53:14.201838 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-06 
00:53:14.201845 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-06 00:53:14.201853 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-06 00:53:14.201861 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-06 00:53:14.201868 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-06 00:53:14.201875 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-06 00:53:14.201882 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-06 00:53:14.201888 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-06 00:53:14.201895 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-06 00:53:14.201901 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-06 00:53:14.201908 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-06 00:53:14.201914 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-06 00:53:14.201921 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-06 00:53:14.201927 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-06 00:53:14.201934 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-06 00:53:14.201945 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-06 00:53:14.201952 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-06 00:53:14.201968 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-06 00:53:14.201976 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-06 00:53:14.201982 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-06 00:53:14.201989 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-06 00:53:14.201995 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-06 00:53:14.202002 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-06 00:53:14.202009 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-06 00:53:14.202043 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-06 00:53:14.202052 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-06 00:53:14.202059 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-06 00:53:14.202065 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-06 00:53:14.202072 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-06 00:53:14.202079 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-06 00:53:14.202086 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-06 00:53:14.202092 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-06 00:53:14.202099 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-06 00:53:14.202106 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-06 00:53:14.202112 | orchestrator | 2025-09-06 00:53:14.202124 | orchestrator | TASK [ceph-config : Include_tasks 
rgw_systemd_environment_file.yml] ************ 2025-09-06 00:53:14.202131 | orchestrator | Saturday 06 September 2025 00:45:22 +0000 (0:00:06.866) 0:03:11.592 **** 2025-09-06 00:53:14.202138 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202144 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.202151 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202158 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.202165 | orchestrator | 2025-09-06 00:53:14.202171 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-06 00:53:14.202178 | orchestrator | Saturday 06 September 2025 00:45:23 +0000 (0:00:00.829) 0:03:12.422 **** 2025-09-06 00:53:14.202185 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.202192 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.202199 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.202206 | orchestrator | 2025-09-06 00:53:14.202212 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-06 00:53:14.202219 | orchestrator | Saturday 06 September 2025 00:45:23 +0000 (0:00:00.631) 0:03:13.053 **** 2025-09-06 00:53:14.202226 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.202233 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.202240 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.202246 | orchestrator | 2025-09-06 00:53:14.202253 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-06 00:53:14.202260 | orchestrator | Saturday 06 September 2025 00:45:25 +0000 (0:00:01.694) 0:03:14.748 **** 2025-09-06 00:53:14.202267 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.202279 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.202286 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.202292 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202299 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.202306 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202312 | orchestrator | 2025-09-06 00:53:14.202319 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-06 00:53:14.202325 | orchestrator | Saturday 06 September 2025 00:45:26 +0000 (0:00:01.157) 0:03:15.905 **** 2025-09-06 00:53:14.202332 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.202339 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.202345 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.202352 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202359 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.202365 | orchestrator | skipping: [testbed-node-2] 
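2025-09-06 00:53:14 | note | For the rgw nodes (testbed-node-3/4/5), the ceph-config tasks above loop over rgw_instances — a single instance rgw0 per node, bound to the node's 192.168.16.x address on port 8081 — creating one instance directory and one systemd environment file each. A minimal sketch of those two steps follows; the directory layout, file name, ownership and file contents follow common ceph-ansible conventions and should be treated as assumptions, since only the instance name, address and port appear in this log.

- name: Prepare RGW instance directories and environment files (illustrative sketch)
  hosts: rgws
  become: true
  vars:
    cluster: ceph                        # assumption: default cluster name
    rgw_instances:                       # values as reported for testbed-node-3 above
      - instance_name: rgw0
        radosgw_address: 192.168.16.13
        radosgw_frontend_port: 8081
  tasks:
    - name: Create rados gateway instance directories
      ansible.builtin.file:
        path: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
        state: directory
        owner: "167"                     # assumption: ceph uid/gid used inside the container
        group: "167"
        mode: "0755"
      loop: "{{ rgw_instances }}"

    - name: Generate environment file
      ansible.builtin.copy:
        dest: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}/EnvironmentFile"
        content: |
          INST_NAME={{ item.instance_name }}
        owner: "167"
        group: "167"
        mode: "0644"
      loop: "{{ rgw_instances }}"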
2025-09-06 00:53:14.202372 | orchestrator | 2025-09-06 00:53:14.202378 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-06 00:53:14.202385 | orchestrator | Saturday 06 September 2025 00:45:27 +0000 (0:00:01.037) 0:03:16.942 **** 2025-09-06 00:53:14.202392 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.202398 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.202405 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.202411 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202418 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.202424 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202431 | orchestrator | 2025-09-06 00:53:14.202438 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-06 00:53:14.202445 | orchestrator | Saturday 06 September 2025 00:45:28 +0000 (0:00:00.732) 0:03:17.675 **** 2025-09-06 00:53:14.202460 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.202467 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.202474 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.202480 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202487 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.202493 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202500 | orchestrator | 2025-09-06 00:53:14.202507 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-06 00:53:14.202513 | orchestrator | Saturday 06 September 2025 00:45:29 +0000 (0:00:00.929) 0:03:18.605 **** 2025-09-06 00:53:14.202520 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.202526 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.202533 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.202539 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202546 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.202552 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202559 | orchestrator | 2025-09-06 00:53:14.202566 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-06 00:53:14.202572 | orchestrator | Saturday 06 September 2025 00:45:29 +0000 (0:00:00.756) 0:03:19.361 **** 2025-09-06 00:53:14.202579 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.202586 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.202592 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.202599 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202605 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.202612 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202618 | orchestrator | 2025-09-06 00:53:14.202625 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-06 00:53:14.202631 | orchestrator | Saturday 06 September 2025 00:45:31 +0000 (0:00:01.263) 0:03:20.624 **** 2025-09-06 00:53:14.202641 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.202648 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.202655 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.202661 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202673 | orchestrator | skipping: [testbed-node-1] 2025-09-06 
00:53:14.202679 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202686 | orchestrator | 2025-09-06 00:53:14.202693 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-06 00:53:14.202700 | orchestrator | Saturday 06 September 2025 00:45:31 +0000 (0:00:00.681) 0:03:21.306 **** 2025-09-06 00:53:14.202706 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.202713 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.202719 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.202726 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202732 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.202756 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202763 | orchestrator | 2025-09-06 00:53:14.202770 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-06 00:53:14.202777 | orchestrator | Saturday 06 September 2025 00:45:33 +0000 (0:00:01.277) 0:03:22.583 **** 2025-09-06 00:53:14.202783 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202790 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.202796 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202803 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.202809 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.202816 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.202822 | orchestrator | 2025-09-06 00:53:14.202829 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-06 00:53:14.202836 | orchestrator | Saturday 06 September 2025 00:45:36 +0000 (0:00:03.031) 0:03:25.615 **** 2025-09-06 00:53:14.202843 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.202849 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.202856 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.202862 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.202869 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202875 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202882 | orchestrator | 2025-09-06 00:53:14.202888 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-06 00:53:14.202895 | orchestrator | Saturday 06 September 2025 00:45:37 +0000 (0:00:00.797) 0:03:26.412 **** 2025-09-06 00:53:14.202902 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.202908 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.202915 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.202921 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202928 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.202934 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202941 | orchestrator | 2025-09-06 00:53:14.202947 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-06 00:53:14.202954 | orchestrator | Saturday 06 September 2025 00:45:37 +0000 (0:00:00.848) 0:03:27.261 **** 2025-09-06 00:53:14.202960 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.202967 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.202973 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.202980 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.202986 | orchestrator | skipping: [testbed-node-1] 2025-09-06 
00:53:14.202993 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.202999 | orchestrator | 2025-09-06 00:53:14.203006 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-06 00:53:14.203012 | orchestrator | Saturday 06 September 2025 00:45:38 +0000 (0:00:00.751) 0:03:28.013 **** 2025-09-06 00:53:14.203019 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.203026 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.203032 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.203043 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.203050 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.203056 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.203063 | orchestrator | 2025-09-06 00:53:14.203073 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-06 00:53:14.203081 | orchestrator | Saturday 06 September 2025 00:45:39 +0000 (0:00:00.732) 0:03:28.745 **** 2025-09-06 00:53:14.203088 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-06 00:53:14.203097 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-06 00:53:14.203105 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-06 00:53:14.203112 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.203125 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-06 00:53:14.203132 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.203139 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-06 00:53:14.203146 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast 
endpoint=192.168.16.15:8081'}])  2025-09-06 00:53:14.203152 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.203159 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.203166 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.203172 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.203179 | orchestrator | 2025-09-06 00:53:14.203185 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-06 00:53:14.203192 | orchestrator | Saturday 06 September 2025 00:45:40 +0000 (0:00:01.454) 0:03:30.200 **** 2025-09-06 00:53:14.203199 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.203205 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.203212 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.203218 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.203225 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.203231 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.203238 | orchestrator | 2025-09-06 00:53:14.203244 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-06 00:53:14.203251 | orchestrator | Saturday 06 September 2025 00:45:41 +0000 (0:00:00.738) 0:03:30.938 **** 2025-09-06 00:53:14.203257 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.203264 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.203270 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.203283 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.203289 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.203296 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.203302 | orchestrator | 2025-09-06 00:53:14.203309 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-06 00:53:14.203316 | orchestrator | Saturday 06 September 2025 00:45:42 +0000 (0:00:00.725) 0:03:31.664 **** 2025-09-06 00:53:14.203322 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.203329 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.203335 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.203341 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.203348 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.203354 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.203361 | orchestrator | 2025-09-06 00:53:14.203368 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-06 00:53:14.203374 | orchestrator | Saturday 06 September 2025 00:45:42 +0000 (0:00:00.590) 0:03:32.254 **** 2025-09-06 00:53:14.203381 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.203387 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.203394 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.203400 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.203407 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.203413 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.203420 | orchestrator | 2025-09-06 00:53:14.203426 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-06 00:53:14.203433 | orchestrator | Saturday 06 September 2025 00:45:43 +0000 (0:00:00.635) 0:03:32.890 **** 2025-09-06 00:53:14.203439 | orchestrator | 
skipping: [testbed-node-3] 2025-09-06 00:53:14.203449 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.203456 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.203463 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.203469 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.203476 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.203482 | orchestrator | 2025-09-06 00:53:14.203489 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-06 00:53:14.203496 | orchestrator | Saturday 06 September 2025 00:45:44 +0000 (0:00:00.781) 0:03:33.672 **** 2025-09-06 00:53:14.203502 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.203509 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.203516 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.203522 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.203529 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.203535 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.203542 | orchestrator | 2025-09-06 00:53:14.203548 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-06 00:53:14.203555 | orchestrator | Saturday 06 September 2025 00:45:44 +0000 (0:00:00.719) 0:03:34.392 **** 2025-09-06 00:53:14.203561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.203568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.203575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.203581 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.203588 | orchestrator | 2025-09-06 00:53:14.203594 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-06 00:53:14.203601 | orchestrator | Saturday 06 September 2025 00:45:45 +0000 (0:00:00.620) 0:03:35.012 **** 2025-09-06 00:53:14.203607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.203617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.203624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.203631 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.203638 | orchestrator | 2025-09-06 00:53:14.203644 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-06 00:53:14.203657 | orchestrator | Saturday 06 September 2025 00:45:45 +0000 (0:00:00.379) 0:03:35.392 **** 2025-09-06 00:53:14.203663 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.203670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.203677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.203683 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.203690 | orchestrator | 2025-09-06 00:53:14.203696 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-06 00:53:14.203703 | orchestrator | Saturday 06 September 2025 00:45:46 +0000 (0:00:00.407) 0:03:35.799 **** 2025-09-06 00:53:14.203709 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.203716 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.203723 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.203729 | orchestrator | skipping: 
[testbed-node-0] 2025-09-06 00:53:14.203736 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.203759 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.203766 | orchestrator | 2025-09-06 00:53:14.203773 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-06 00:53:14.203779 | orchestrator | Saturday 06 September 2025 00:45:47 +0000 (0:00:00.716) 0:03:36.516 **** 2025-09-06 00:53:14.203786 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-06 00:53:14.203793 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-06 00:53:14.203799 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-06 00:53:14.203806 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-06 00:53:14.203812 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.203819 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-06 00:53:14.203825 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.203832 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-06 00:53:14.203838 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.203845 | orchestrator | 2025-09-06 00:53:14.203851 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-06 00:53:14.203858 | orchestrator | Saturday 06 September 2025 00:45:48 +0000 (0:00:01.681) 0:03:38.198 **** 2025-09-06 00:53:14.203865 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.203871 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.203878 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.203884 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.203891 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.203897 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.203904 | orchestrator | 2025-09-06 00:53:14.203910 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-06 00:53:14.203917 | orchestrator | Saturday 06 September 2025 00:45:50 +0000 (0:00:02.144) 0:03:40.342 **** 2025-09-06 00:53:14.203923 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.203930 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.203936 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.203943 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.203949 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.203956 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.203962 | orchestrator | 2025-09-06 00:53:14.203969 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-06 00:53:14.203975 | orchestrator | Saturday 06 September 2025 00:45:51 +0000 (0:00:00.918) 0:03:41.261 **** 2025-09-06 00:53:14.203982 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.203989 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.203995 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.204002 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.204008 | orchestrator | 2025-09-06 00:53:14.204015 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-06 00:53:14.204026 | orchestrator | Saturday 06 September 2025 00:45:52 +0000 (0:00:00.973) 0:03:42.235 **** 2025-09-06 00:53:14.204032 | orchestrator | ok: 
[testbed-node-0] 2025-09-06 00:53:14.204039 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.204045 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.204052 | orchestrator | 2025-09-06 00:53:14.204063 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-06 00:53:14.204070 | orchestrator | Saturday 06 September 2025 00:45:53 +0000 (0:00:00.284) 0:03:42.519 **** 2025-09-06 00:53:14.204076 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.204083 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.204089 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.204096 | orchestrator | 2025-09-06 00:53:14.204103 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-06 00:53:14.204109 | orchestrator | Saturday 06 September 2025 00:45:54 +0000 (0:00:01.184) 0:03:43.704 **** 2025-09-06 00:53:14.204116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-06 00:53:14.204122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-06 00:53:14.204129 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-06 00:53:14.204135 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.204142 | orchestrator | 2025-09-06 00:53:14.204149 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-06 00:53:14.204155 | orchestrator | Saturday 06 September 2025 00:45:55 +0000 (0:00:00.909) 0:03:44.614 **** 2025-09-06 00:53:14.204162 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.204169 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.204175 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.204182 | orchestrator | 2025-09-06 00:53:14.204188 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-06 00:53:14.204195 | orchestrator | Saturday 06 September 2025 00:45:55 +0000 (0:00:00.597) 0:03:45.211 **** 2025-09-06 00:53:14.204201 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.204208 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.204214 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.204225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.204232 | orchestrator | 2025-09-06 00:53:14.204238 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-06 00:53:14.204245 | orchestrator | Saturday 06 September 2025 00:45:56 +0000 (0:00:00.845) 0:03:46.057 **** 2025-09-06 00:53:14.204252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.204258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.204265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.204271 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204278 | orchestrator | 2025-09-06 00:53:14.204284 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-06 00:53:14.204291 | orchestrator | Saturday 06 September 2025 00:45:57 +0000 (0:00:00.599) 0:03:46.657 **** 2025-09-06 00:53:14.204298 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204304 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.204311 | orchestrator | 
skipping: [testbed-node-5] 2025-09-06 00:53:14.204317 | orchestrator | 2025-09-06 00:53:14.204324 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-06 00:53:14.204330 | orchestrator | Saturday 06 September 2025 00:45:57 +0000 (0:00:00.540) 0:03:47.197 **** 2025-09-06 00:53:14.204337 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204344 | orchestrator | 2025-09-06 00:53:14.204350 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-06 00:53:14.204357 | orchestrator | Saturday 06 September 2025 00:45:58 +0000 (0:00:00.261) 0:03:47.458 **** 2025-09-06 00:53:14.204363 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204374 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.204381 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.204387 | orchestrator | 2025-09-06 00:53:14.204394 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-06 00:53:14.204400 | orchestrator | Saturday 06 September 2025 00:45:58 +0000 (0:00:00.327) 0:03:47.786 **** 2025-09-06 00:53:14.204407 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204413 | orchestrator | 2025-09-06 00:53:14.204420 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-06 00:53:14.204427 | orchestrator | Saturday 06 September 2025 00:45:58 +0000 (0:00:00.217) 0:03:48.003 **** 2025-09-06 00:53:14.204433 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204440 | orchestrator | 2025-09-06 00:53:14.204447 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-06 00:53:14.204454 | orchestrator | Saturday 06 September 2025 00:45:58 +0000 (0:00:00.226) 0:03:48.230 **** 2025-09-06 00:53:14.204460 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204467 | orchestrator | 2025-09-06 00:53:14.204473 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-06 00:53:14.204480 | orchestrator | Saturday 06 September 2025 00:45:58 +0000 (0:00:00.143) 0:03:48.373 **** 2025-09-06 00:53:14.204487 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204493 | orchestrator | 2025-09-06 00:53:14.204500 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-06 00:53:14.204506 | orchestrator | Saturday 06 September 2025 00:45:59 +0000 (0:00:00.220) 0:03:48.594 **** 2025-09-06 00:53:14.204513 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204520 | orchestrator | 2025-09-06 00:53:14.204526 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-06 00:53:14.204533 | orchestrator | Saturday 06 September 2025 00:45:59 +0000 (0:00:00.212) 0:03:48.806 **** 2025-09-06 00:53:14.204540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.204546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.204553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.204559 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204566 | orchestrator | 2025-09-06 00:53:14.204573 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-06 00:53:14.204579 | orchestrator | Saturday 06 September 
2025 00:46:00 +0000 (0:00:00.850) 0:03:49.657 **** 2025-09-06 00:53:14.204586 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204597 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.204603 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.204610 | orchestrator | 2025-09-06 00:53:14.204616 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-06 00:53:14.204623 | orchestrator | Saturday 06 September 2025 00:46:00 +0000 (0:00:00.310) 0:03:49.968 **** 2025-09-06 00:53:14.204630 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204636 | orchestrator | 2025-09-06 00:53:14.204643 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-06 00:53:14.204650 | orchestrator | Saturday 06 September 2025 00:46:00 +0000 (0:00:00.238) 0:03:50.207 **** 2025-09-06 00:53:14.204656 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204663 | orchestrator | 2025-09-06 00:53:14.204669 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-06 00:53:14.204676 | orchestrator | Saturday 06 September 2025 00:46:01 +0000 (0:00:00.210) 0:03:50.418 **** 2025-09-06 00:53:14.204683 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.204689 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.204696 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.204702 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.204709 | orchestrator | 2025-09-06 00:53:14.204715 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-06 00:53:14.204727 | orchestrator | Saturday 06 September 2025 00:46:02 +0000 (0:00:01.178) 0:03:51.596 **** 2025-09-06 00:53:14.204734 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.204762 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.204775 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.204785 | orchestrator | 2025-09-06 00:53:14.204795 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-06 00:53:14.204806 | orchestrator | Saturday 06 September 2025 00:46:02 +0000 (0:00:00.372) 0:03:51.968 **** 2025-09-06 00:53:14.204813 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.204820 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.204826 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.204833 | orchestrator | 2025-09-06 00:53:14.204839 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-06 00:53:14.204846 | orchestrator | Saturday 06 September 2025 00:46:03 +0000 (0:00:01.209) 0:03:53.177 **** 2025-09-06 00:53:14.204853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.204859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.204866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.204872 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.204879 | orchestrator | 2025-09-06 00:53:14.204885 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-06 00:53:14.204892 | orchestrator | Saturday 06 September 2025 00:46:04 +0000 (0:00:00.859) 0:03:54.037 **** 2025-09-06 
00:53:14.204898 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.204905 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.204912 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.204918 | orchestrator | 2025-09-06 00:53:14.204925 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-06 00:53:14.204931 | orchestrator | Saturday 06 September 2025 00:46:05 +0000 (0:00:00.564) 0:03:54.602 **** 2025-09-06 00:53:14.204938 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.204944 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.204951 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.204957 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.204964 | orchestrator | 2025-09-06 00:53:14.204970 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-06 00:53:14.204977 | orchestrator | Saturday 06 September 2025 00:46:06 +0000 (0:00:01.027) 0:03:55.629 **** 2025-09-06 00:53:14.204984 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.204990 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.204997 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.205003 | orchestrator | 2025-09-06 00:53:14.205010 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-06 00:53:14.205016 | orchestrator | Saturday 06 September 2025 00:46:06 +0000 (0:00:00.376) 0:03:56.006 **** 2025-09-06 00:53:14.205023 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.205029 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.205036 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.205043 | orchestrator | 2025-09-06 00:53:14.205049 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-06 00:53:14.205056 | orchestrator | Saturday 06 September 2025 00:46:08 +0000 (0:00:01.455) 0:03:57.462 **** 2025-09-06 00:53:14.205062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.205069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.205076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.205082 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.205089 | orchestrator | 2025-09-06 00:53:14.205095 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-06 00:53:14.205102 | orchestrator | Saturday 06 September 2025 00:46:08 +0000 (0:00:00.520) 0:03:57.982 **** 2025-09-06 00:53:14.205114 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.205120 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.205127 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.205134 | orchestrator | 2025-09-06 00:53:14.205140 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-06 00:53:14.205147 | orchestrator | Saturday 06 September 2025 00:46:08 +0000 (0:00:00.258) 0:03:58.241 **** 2025-09-06 00:53:14.205154 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.205160 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.205167 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.205173 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.205180 | 
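The mds/rgw handler runs above follow the generic ceph-handler pattern: flag the handler as called, stage a restart helper script on the nodes, run it from only the first host of the group (which is why only testbed-node-3 shows up in the "Restart ceph mds/rgw daemon(s)" output, looping over the other nodes as items), then clear the flag. A minimal, hypothetical sketch of that pattern follows; it is not the actual ceph-ansible role code, and the group name mdss plus the template names are illustrative only.

- name: Sketch of the ceph-handler restart pattern (illustrative only)
  hosts: mdss
  become: true
  tasks:
    - name: Deploy ceph.conf (any change triggers the handler chain)
      ansible.builtin.template:
        src: ceph.conf.j2                    # hypothetical template
        dest: /etc/ceph/ceph.conf
      notify: restart ceph mdss
  handlers:
    - name: Copy mds restart script
      ansible.builtin.template:
        src: restart_mds_daemon.sh.j2        # hypothetical helper script
        dest: /tmp/restart_mds_daemon.sh
        mode: "0750"
      listen: restart ceph mdss
    - name: Restart ceph mds daemon(s) one node at a time
      ansible.builtin.command: /tmp/restart_mds_daemon.sh
      delegate_to: "{{ item }}"
      loop: "{{ groups['mdss'] }}"
      run_once: true                         # single driver host walks the whole group
      listen: restart ceph mdss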
orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.205186 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.205193 | orchestrator | 2025-09-06 00:53:14.205199 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-06 00:53:14.205210 | orchestrator | Saturday 06 September 2025 00:46:09 +0000 (0:00:00.539) 0:03:58.781 **** 2025-09-06 00:53:14.205218 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.205224 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.205231 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.205237 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.205244 | orchestrator | 2025-09-06 00:53:14.205251 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-06 00:53:14.205257 | orchestrator | Saturday 06 September 2025 00:46:10 +0000 (0:00:00.625) 0:03:59.406 **** 2025-09-06 00:53:14.205264 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.205271 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.205277 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.205284 | orchestrator | 2025-09-06 00:53:14.205290 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-06 00:53:14.205297 | orchestrator | Saturday 06 September 2025 00:46:10 +0000 (0:00:00.380) 0:03:59.786 **** 2025-09-06 00:53:14.205304 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.205310 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.205317 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.205323 | orchestrator | 2025-09-06 00:53:14.205330 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-06 00:53:14.205337 | orchestrator | Saturday 06 September 2025 00:46:11 +0000 (0:00:01.182) 0:04:00.970 **** 2025-09-06 00:53:14.205343 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-06 00:53:14.205350 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-06 00:53:14.205356 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-06 00:53:14.205366 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.205373 | orchestrator | 2025-09-06 00:53:14.205380 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-06 00:53:14.205387 | orchestrator | Saturday 06 September 2025 00:46:12 +0000 (0:00:00.559) 0:04:01.529 **** 2025-09-06 00:53:14.205393 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.205400 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.205406 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.205413 | orchestrator | 2025-09-06 00:53:14.205420 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-06 00:53:14.205426 | orchestrator | 2025-09-06 00:53:14.205433 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-06 00:53:14.205439 | orchestrator | Saturday 06 September 2025 00:46:12 +0000 (0:00:00.483) 0:04:02.013 **** 2025-09-06 00:53:14.205446 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.205453 | orchestrator | 2025-09-06 
00:53:14.205459 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-06 00:53:14.205466 | orchestrator | Saturday 06 September 2025 00:46:13 +0000 (0:00:00.524) 0:04:02.538 **** 2025-09-06 00:53:14.205477 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.205484 | orchestrator | 2025-09-06 00:53:14.205490 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-06 00:53:14.205497 | orchestrator | Saturday 06 September 2025 00:46:13 +0000 (0:00:00.381) 0:04:02.920 **** 2025-09-06 00:53:14.205503 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.205510 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.205517 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.205523 | orchestrator | 2025-09-06 00:53:14.205530 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-06 00:53:14.205536 | orchestrator | Saturday 06 September 2025 00:46:14 +0000 (0:00:00.639) 0:04:03.559 **** 2025-09-06 00:53:14.205543 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.205550 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.205556 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.205563 | orchestrator | 2025-09-06 00:53:14.205569 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-06 00:53:14.205576 | orchestrator | Saturday 06 September 2025 00:46:14 +0000 (0:00:00.414) 0:04:03.974 **** 2025-09-06 00:53:14.205582 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.205589 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.205596 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.205602 | orchestrator | 2025-09-06 00:53:14.205609 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-06 00:53:14.205616 | orchestrator | Saturday 06 September 2025 00:46:14 +0000 (0:00:00.274) 0:04:04.248 **** 2025-09-06 00:53:14.205622 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.205629 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.205635 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.205642 | orchestrator | 2025-09-06 00:53:14.205648 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-06 00:53:14.205655 | orchestrator | Saturday 06 September 2025 00:46:15 +0000 (0:00:00.267) 0:04:04.516 **** 2025-09-06 00:53:14.205662 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.205668 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.205675 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.205681 | orchestrator | 2025-09-06 00:53:14.205688 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-06 00:53:14.205695 | orchestrator | Saturday 06 September 2025 00:46:15 +0000 (0:00:00.693) 0:04:05.210 **** 2025-09-06 00:53:14.205701 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.205708 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.205714 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.205721 | orchestrator | 2025-09-06 00:53:14.205728 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-06 00:53:14.205734 | 
orchestrator | Saturday 06 September 2025 00:46:16 +0000 (0:00:00.435) 0:04:05.646 **** 2025-09-06 00:53:14.205757 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.205764 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.205771 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.205777 | orchestrator | 2025-09-06 00:53:14.205788 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-06 00:53:14.205795 | orchestrator | Saturday 06 September 2025 00:46:16 +0000 (0:00:00.295) 0:04:05.941 **** 2025-09-06 00:53:14.205801 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.205808 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.205815 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.205821 | orchestrator | 2025-09-06 00:53:14.205828 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-06 00:53:14.205834 | orchestrator | Saturday 06 September 2025 00:46:17 +0000 (0:00:00.768) 0:04:06.710 **** 2025-09-06 00:53:14.205841 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.205853 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.205859 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.205866 | orchestrator | 2025-09-06 00:53:14.205872 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-06 00:53:14.205879 | orchestrator | Saturday 06 September 2025 00:46:17 +0000 (0:00:00.683) 0:04:07.393 **** 2025-09-06 00:53:14.205886 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.205892 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.205899 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.205905 | orchestrator | 2025-09-06 00:53:14.205912 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-06 00:53:14.205918 | orchestrator | Saturday 06 September 2025 00:46:18 +0000 (0:00:00.448) 0:04:07.842 **** 2025-09-06 00:53:14.205925 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.205931 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.205938 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.205944 | orchestrator | 2025-09-06 00:53:14.205950 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-06 00:53:14.205957 | orchestrator | Saturday 06 September 2025 00:46:18 +0000 (0:00:00.367) 0:04:08.209 **** 2025-09-06 00:53:14.205968 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.205974 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.205981 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.205987 | orchestrator | 2025-09-06 00:53:14.205994 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-06 00:53:14.206001 | orchestrator | Saturday 06 September 2025 00:46:19 +0000 (0:00:00.316) 0:04:08.526 **** 2025-09-06 00:53:14.206007 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.206014 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.206093 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.206101 | orchestrator | 2025-09-06 00:53:14.206108 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-06 00:53:14.206115 | orchestrator | Saturday 06 September 2025 00:46:19 +0000 (0:00:00.306) 0:04:08.833 **** 2025-09-06 00:53:14.206121 
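The "Check for a ... container" tasks above probe each node for running ceph containers, and the results feed the later "Set_fact handler_*_status" tasks that the restart handlers consult. A hedged sketch of that probe-then-fact pattern; the use of podman and the container name ceph-mon-<hostname> are assumptions (the real role selects docker or podman via its container binary setting).

- name: Sketch of a container liveness probe feeding a handler status fact
  hosts: mons
  become: true
  tasks:
    - name: Check for a mon container
      ansible.builtin.command: podman ps -q --filter "name=ceph-mon-{{ ansible_facts['hostname'] }}"
      register: ceph_mon_container_stat
      changed_when: false
      failed_when: false
    - name: Set_fact handler_mon_status
      ansible.builtin.set_fact:
        handler_mon_status: "{{ ceph_mon_container_stat.stdout | length > 0 }}"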
| orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.206128 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.206134 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.206141 | orchestrator | 2025-09-06 00:53:14.206148 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-06 00:53:14.206154 | orchestrator | Saturday 06 September 2025 00:46:19 +0000 (0:00:00.538) 0:04:09.372 **** 2025-09-06 00:53:14.206161 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.206168 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.206174 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.206181 | orchestrator | 2025-09-06 00:53:14.206187 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-06 00:53:14.206194 | orchestrator | Saturday 06 September 2025 00:46:20 +0000 (0:00:00.359) 0:04:09.731 **** 2025-09-06 00:53:14.206201 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.206207 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.206214 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.206220 | orchestrator | 2025-09-06 00:53:14.206227 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-06 00:53:14.206234 | orchestrator | Saturday 06 September 2025 00:46:20 +0000 (0:00:00.341) 0:04:10.072 **** 2025-09-06 00:53:14.206240 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.206247 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.206253 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.206260 | orchestrator | 2025-09-06 00:53:14.206267 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-06 00:53:14.206274 | orchestrator | Saturday 06 September 2025 00:46:21 +0000 (0:00:00.369) 0:04:10.442 **** 2025-09-06 00:53:14.206280 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.206287 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.206299 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.206306 | orchestrator | 2025-09-06 00:53:14.206312 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-06 00:53:14.206319 | orchestrator | Saturday 06 September 2025 00:46:21 +0000 (0:00:00.346) 0:04:10.789 **** 2025-09-06 00:53:14.206326 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.206332 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.206339 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.206345 | orchestrator | 2025-09-06 00:53:14.206352 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-06 00:53:14.206359 | orchestrator | Saturday 06 September 2025 00:46:22 +0000 (0:00:00.878) 0:04:11.667 **** 2025-09-06 00:53:14.206365 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.206372 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.206379 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.206385 | orchestrator | 2025-09-06 00:53:14.206392 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-06 00:53:14.206398 | orchestrator | Saturday 06 September 2025 00:46:22 +0000 (0:00:00.346) 0:04:12.014 **** 2025-09-06 00:53:14.206405 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2025-09-06 00:53:14.206412 | orchestrator | 2025-09-06 00:53:14.206418 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-06 00:53:14.206425 | orchestrator | Saturday 06 September 2025 00:46:23 +0000 (0:00:00.750) 0:04:12.765 **** 2025-09-06 00:53:14.206432 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.206438 | orchestrator | 2025-09-06 00:53:14.206445 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-06 00:53:14.206475 | orchestrator | Saturday 06 September 2025 00:46:23 +0000 (0:00:00.158) 0:04:12.923 **** 2025-09-06 00:53:14.206483 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-06 00:53:14.206490 | orchestrator | 2025-09-06 00:53:14.206497 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-06 00:53:14.206503 | orchestrator | Saturday 06 September 2025 00:46:24 +0000 (0:00:00.888) 0:04:13.812 **** 2025-09-06 00:53:14.206510 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.206517 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.206523 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.206530 | orchestrator | 2025-09-06 00:53:14.206536 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-06 00:53:14.206543 | orchestrator | Saturday 06 September 2025 00:46:24 +0000 (0:00:00.324) 0:04:14.136 **** 2025-09-06 00:53:14.206550 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.206556 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.206563 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.206569 | orchestrator | 2025-09-06 00:53:14.206576 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-06 00:53:14.206582 | orchestrator | Saturday 06 September 2025 00:46:25 +0000 (0:00:00.314) 0:04:14.451 **** 2025-09-06 00:53:14.206589 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.206596 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.206602 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.206609 | orchestrator | 2025-09-06 00:53:14.206615 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-06 00:53:14.206622 | orchestrator | Saturday 06 September 2025 00:46:26 +0000 (0:00:01.398) 0:04:15.849 **** 2025-09-06 00:53:14.206629 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.206635 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.206642 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.206648 | orchestrator | 2025-09-06 00:53:14.206655 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-06 00:53:14.206668 | orchestrator | Saturday 06 September 2025 00:46:27 +0000 (0:00:00.695) 0:04:16.545 **** 2025-09-06 00:53:14.206675 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.206682 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.206693 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.206700 | orchestrator | 2025-09-06 00:53:14.206707 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-06 00:53:14.206713 | orchestrator | Saturday 06 September 2025 00:46:27 +0000 (0:00:00.656) 0:04:17.201 **** 2025-09-06 00:53:14.206720 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.206727 
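The keyring and directory tasks above bootstrap each monitor's on-disk state: a mon. keyring is generated once, copied into /etc/ceph for the containers, and a per-host monitor directory is created with ceph ownership. A minimal sketch of those two steps using the standard ceph-authtool call; the keyring path and the uid/gid 167 (the ceph user in common ceph container images) are assumptions for illustration.

- name: Sketch of creating the monitor initial keyring (illustrative)
  hosts: mons
  become: true
  vars:
    mon_keyring: /etc/ceph/ceph.mon.keyring   # assumed path
  tasks:
    - name: Create monitor initial keyring with ceph-authtool
      ansible.builtin.command: >
        ceph-authtool --create-keyring {{ mon_keyring }}
        --gen-key -n mon. --cap mon 'allow *'
      args:
        creates: "{{ mon_keyring }}"
    - name: Create monitor directory
      ansible.builtin.file:
        path: "/var/lib/ceph/mon/ceph-{{ ansible_facts['hostname'] }}"
        state: directory
        owner: "167"                           # assumed ceph uid in the container image
        group: "167"
        mode: "0755"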
| orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.206733 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.206779 | orchestrator | 2025-09-06 00:53:14.206788 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-06 00:53:14.206795 | orchestrator | Saturday 06 September 2025 00:46:28 +0000 (0:00:00.735) 0:04:17.937 **** 2025-09-06 00:53:14.206801 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.206808 | orchestrator | 2025-09-06 00:53:14.206815 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-06 00:53:14.206821 | orchestrator | Saturday 06 September 2025 00:46:29 +0000 (0:00:01.407) 0:04:19.344 **** 2025-09-06 00:53:14.206828 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.206835 | orchestrator | 2025-09-06 00:53:14.206841 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-06 00:53:14.206848 | orchestrator | Saturday 06 September 2025 00:46:30 +0000 (0:00:00.750) 0:04:20.095 **** 2025-09-06 00:53:14.206855 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-06 00:53:14.206861 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.206868 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.206875 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-06 00:53:14.206881 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-09-06 00:53:14.206888 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-06 00:53:14.206895 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-06 00:53:14.206901 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-06 00:53:14.206908 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-06 00:53:14.206915 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-06 00:53:14.206921 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-06 00:53:14.206928 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-09-06 00:53:14.206935 | orchestrator | 2025-09-06 00:53:14.206941 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-06 00:53:14.206948 | orchestrator | Saturday 06 September 2025 00:46:34 +0000 (0:00:03.946) 0:04:24.042 **** 2025-09-06 00:53:14.206955 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.206961 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.206968 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.206975 | orchestrator | 2025-09-06 00:53:14.206981 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-06 00:53:14.206988 | orchestrator | Saturday 06 September 2025 00:46:35 +0000 (0:00:01.296) 0:04:25.338 **** 2025-09-06 00:53:14.206995 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.207001 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.207008 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.207015 | orchestrator | 2025-09-06 00:53:14.207021 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-06 00:53:14.207028 | orchestrator | Saturday 06 September 2025 00:46:36 +0000 (0:00:00.388) 0:04:25.727 **** 2025-09-06 
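The "Slurp admin keyring" / "Copy admin keyring over to mons" pair reads the admin keyring from the first monitor and writes it to the others; the mixed ok/changed lines with "-> testbed-node-1(192.168.16.11)" style suffixes are those delegated copies. A hedged sketch of that fan-out (group name mons and ownership values are assumptions):

- name: Sketch of fanning the admin keyring out from the first mon
  hosts: mons
  become: true
  tasks:
    - name: Slurp admin keyring from the first mon
      ansible.builtin.slurp:
        src: /etc/ceph/ceph.client.admin.keyring
      register: admin_keyring
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true
    - name: Copy admin keyring over to mons
      ansible.builtin.copy:
        content: "{{ admin_keyring.content | b64decode }}"
        dest: /etc/ceph/ceph.client.admin.keyring
        owner: "167"
        group: "167"
        mode: "0600"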
00:53:14.207035 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.207041 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.207048 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.207055 | orchestrator | 2025-09-06 00:53:14.207061 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-06 00:53:14.207068 | orchestrator | Saturday 06 September 2025 00:46:36 +0000 (0:00:00.417) 0:04:26.145 **** 2025-09-06 00:53:14.207079 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.207086 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.207093 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.207100 | orchestrator | 2025-09-06 00:53:14.207130 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-06 00:53:14.207138 | orchestrator | Saturday 06 September 2025 00:46:38 +0000 (0:00:02.228) 0:04:28.373 **** 2025-09-06 00:53:14.207145 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.207152 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.207158 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.207165 | orchestrator | 2025-09-06 00:53:14.207172 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-06 00:53:14.207178 | orchestrator | Saturday 06 September 2025 00:46:40 +0000 (0:00:01.248) 0:04:29.622 **** 2025-09-06 00:53:14.207185 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.207192 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.207198 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.207205 | orchestrator | 2025-09-06 00:53:14.207211 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-06 00:53:14.207218 | orchestrator | Saturday 06 September 2025 00:46:40 +0000 (0:00:00.322) 0:04:29.944 **** 2025-09-06 00:53:14.207225 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.207232 | orchestrator | 2025-09-06 00:53:14.207238 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-06 00:53:14.207245 | orchestrator | Saturday 06 September 2025 00:46:41 +0000 (0:00:00.598) 0:04:30.542 **** 2025-09-06 00:53:14.207252 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.207258 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.207265 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.207271 | orchestrator | 2025-09-06 00:53:14.207278 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-06 00:53:14.207289 | orchestrator | Saturday 06 September 2025 00:46:41 +0000 (0:00:00.271) 0:04:30.814 **** 2025-09-06 00:53:14.207295 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.207302 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.207308 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.207314 | orchestrator | 2025-09-06 00:53:14.207320 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-06 00:53:14.207327 | orchestrator | Saturday 06 September 2025 00:46:41 +0000 (0:00:00.284) 0:04:31.098 **** 2025-09-06 00:53:14.207333 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 
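"Generate initial monmap" and "Ceph monitor mkfs with keyring" correspond to the standard manual monitor bootstrap steps: build a monmap containing this mon's name and address, then initialise the mon data directory from that monmap and the mon. keyring. A sketch with plain ceph tools; the fsid placeholder, file paths, and the default_ipv4 address fact are assumptions.

- name: Sketch of monmap generation and mon mkfs (illustrative)
  hosts: mons
  become: true
  vars:
    ceph_fsid: 00000000-0000-0000-0000-000000000000   # placeholder fsid
  tasks:
    - name: Generate initial monmap
      ansible.builtin.command: >
        monmaptool --create
        --add {{ ansible_facts['hostname'] }} {{ ansible_facts['default_ipv4']['address'] }}
        --fsid {{ ceph_fsid }} /etc/ceph/monmap
      args:
        creates: /etc/ceph/monmap
    - name: Ceph monitor mkfs with keyring
      ansible.builtin.command: >
        ceph-mon --mkfs -i {{ ansible_facts['hostname'] }}
        --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
      args:
        creates: "/var/lib/ceph/mon/ceph-{{ ansible_facts['hostname'] }}/keyring"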
2025-09-06 00:53:14.207339 | orchestrator | 2025-09-06 00:53:14.207345 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-06 00:53:14.207351 | orchestrator | Saturday 06 September 2025 00:46:42 +0000 (0:00:00.463) 0:04:31.562 **** 2025-09-06 00:53:14.207357 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.207364 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.207370 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.207376 | orchestrator | 2025-09-06 00:53:14.207382 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-06 00:53:14.207388 | orchestrator | Saturday 06 September 2025 00:46:44 +0000 (0:00:01.838) 0:04:33.400 **** 2025-09-06 00:53:14.207395 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.207401 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.207407 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.207413 | orchestrator | 2025-09-06 00:53:14.207419 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-06 00:53:14.207426 | orchestrator | Saturday 06 September 2025 00:46:45 +0000 (0:00:01.156) 0:04:34.557 **** 2025-09-06 00:53:14.207432 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.207438 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.207444 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.207454 | orchestrator | 2025-09-06 00:53:14.207461 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-06 00:53:14.207467 | orchestrator | Saturday 06 September 2025 00:46:47 +0000 (0:00:01.902) 0:04:36.460 **** 2025-09-06 00:53:14.207473 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.207479 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.207485 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.207491 | orchestrator | 2025-09-06 00:53:14.207497 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-06 00:53:14.207504 | orchestrator | Saturday 06 September 2025 00:46:49 +0000 (0:00:02.248) 0:04:38.708 **** 2025-09-06 00:53:14.207510 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.207516 | orchestrator | 2025-09-06 00:53:14.207522 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-09-06 00:53:14.207528 | orchestrator | Saturday 06 September 2025 00:46:50 +0000 (0:00:01.136) 0:04:39.845 **** 2025-09-06 00:53:14.207534 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.207541 | orchestrator | 2025-09-06 00:53:14.207547 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-06 00:53:14.207553 | orchestrator | Saturday 06 September 2025 00:46:51 +0000 (0:00:01.305) 0:04:41.151 **** 2025-09-06 00:53:14.207559 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.207565 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.207571 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.207577 | orchestrator | 2025-09-06 00:53:14.207583 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-06 00:53:14.207590 | orchestrator | Saturday 06 September 2025 00:47:01 +0000 (0:00:09.718) 0:04:50.869 **** 2025-09-06 00:53:14.207596 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.207602 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.207608 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.207614 | orchestrator | 2025-09-06 00:53:14.207620 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-06 00:53:14.207626 | orchestrator | Saturday 06 September 2025 00:47:01 +0000 (0:00:00.314) 0:04:51.183 **** 2025-09-06 00:53:14.207650 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5c6f001cc771927e83a5999c8031af51e8c2466'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-06 00:53:14.207660 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5c6f001cc771927e83a5999c8031af51e8c2466'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-06 00:53:14.207667 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5c6f001cc771927e83a5999c8031af51e8c2466'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-06 00:53:14.207678 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5c6f001cc771927e83a5999c8031af51e8c2466'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-06 00:53:14.207685 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5c6f001cc771927e83a5999c8031af51e8c2466'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-06 
00:53:14.207697 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5c6f001cc771927e83a5999c8031af51e8c2466'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d5c6f001cc771927e83a5999c8031af51e8c2466'}])  2025-09-06 00:53:14.207704 | orchestrator | 2025-09-06 00:53:14.207710 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-06 00:53:14.207716 | orchestrator | Saturday 06 September 2025 00:47:18 +0000 (0:00:16.321) 0:05:07.505 **** 2025-09-06 00:53:14.207723 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.207729 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.207735 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.207756 | orchestrator | 2025-09-06 00:53:14.207763 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-06 00:53:14.207769 | orchestrator | Saturday 06 September 2025 00:47:18 +0000 (0:00:00.368) 0:05:07.874 **** 2025-09-06 00:53:14.207776 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.207782 | orchestrator | 2025-09-06 00:53:14.207788 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-06 00:53:14.207794 | orchestrator | Saturday 06 September 2025 00:47:19 +0000 (0:00:00.808) 0:05:08.683 **** 2025-09-06 00:53:14.207800 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.207806 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.207812 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.207819 | orchestrator | 2025-09-06 00:53:14.207825 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-06 00:53:14.207831 | orchestrator | Saturday 06 September 2025 00:47:19 +0000 (0:00:00.359) 0:05:09.042 **** 2025-09-06 00:53:14.207837 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.207843 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.207849 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.207855 | orchestrator | 2025-09-06 00:53:14.207862 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-06 00:53:14.207868 | orchestrator | Saturday 06 September 2025 00:47:19 +0000 (0:00:00.343) 0:05:09.385 **** 2025-09-06 00:53:14.207874 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-06 00:53:14.207880 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-06 00:53:14.207886 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-06 00:53:14.207892 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.207899 | orchestrator | 2025-09-06 00:53:14.207905 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-06 00:53:14.207911 | orchestrator | Saturday 06 September 2025 00:47:20 +0000 (0:00:00.881) 0:05:10.267 **** 2025-09-06 00:53:14.207917 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.207923 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.207930 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.207936 | 
orchestrator | 2025-09-06 00:53:14.207942 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-06 00:53:14.207948 | orchestrator | 2025-09-06 00:53:14.207955 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-06 00:53:14.207980 | orchestrator | Saturday 06 September 2025 00:47:21 +0000 (0:00:00.839) 0:05:11.107 **** 2025-09-06 00:53:14.207987 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.207994 | orchestrator | 2025-09-06 00:53:14.208005 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-06 00:53:14.208011 | orchestrator | Saturday 06 September 2025 00:47:22 +0000 (0:00:00.518) 0:05:11.625 **** 2025-09-06 00:53:14.208018 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-09-06 00:53:14.208024 | orchestrator | 2025-09-06 00:53:14.208030 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-06 00:53:14.208036 | orchestrator | Saturday 06 September 2025 00:47:23 +0000 (0:00:00.775) 0:05:12.401 **** 2025-09-06 00:53:14.208042 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.208049 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.208055 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.208061 | orchestrator | 2025-09-06 00:53:14.208067 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-06 00:53:14.208073 | orchestrator | Saturday 06 September 2025 00:47:23 +0000 (0:00:00.882) 0:05:13.283 **** 2025-09-06 00:53:14.208080 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208086 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208092 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208098 | orchestrator | 2025-09-06 00:53:14.208104 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-06 00:53:14.208111 | orchestrator | Saturday 06 September 2025 00:47:24 +0000 (0:00:00.341) 0:05:13.624 **** 2025-09-06 00:53:14.208117 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208126 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208133 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208139 | orchestrator | 2025-09-06 00:53:14.208145 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-06 00:53:14.208151 | orchestrator | Saturday 06 September 2025 00:47:24 +0000 (0:00:00.301) 0:05:13.926 **** 2025-09-06 00:53:14.208157 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208164 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208170 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208176 | orchestrator | 2025-09-06 00:53:14.208182 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-06 00:53:14.208188 | orchestrator | Saturday 06 September 2025 00:47:25 +0000 (0:00:00.505) 0:05:14.431 **** 2025-09-06 00:53:14.208194 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.208201 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.208207 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.208213 | orchestrator | 
2025-09-06 00:53:14.208219 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-06 00:53:14.208225 | orchestrator | Saturday 06 September 2025 00:47:25 +0000 (0:00:00.743) 0:05:15.175 **** 2025-09-06 00:53:14.208231 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208237 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208244 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208250 | orchestrator | 2025-09-06 00:53:14.208256 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-06 00:53:14.208262 | orchestrator | Saturday 06 September 2025 00:47:26 +0000 (0:00:00.287) 0:05:15.462 **** 2025-09-06 00:53:14.208268 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208274 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208281 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208287 | orchestrator | 2025-09-06 00:53:14.208293 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-06 00:53:14.208299 | orchestrator | Saturday 06 September 2025 00:47:26 +0000 (0:00:00.272) 0:05:15.734 **** 2025-09-06 00:53:14.208305 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.208311 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.208317 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.208323 | orchestrator | 2025-09-06 00:53:14.208330 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-06 00:53:14.208336 | orchestrator | Saturday 06 September 2025 00:47:27 +0000 (0:00:00.888) 0:05:16.623 **** 2025-09-06 00:53:14.208346 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.208352 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.208358 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.208364 | orchestrator | 2025-09-06 00:53:14.208371 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-06 00:53:14.208377 | orchestrator | Saturday 06 September 2025 00:47:27 +0000 (0:00:00.753) 0:05:17.377 **** 2025-09-06 00:53:14.208383 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208389 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208395 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208401 | orchestrator | 2025-09-06 00:53:14.208407 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-06 00:53:14.208413 | orchestrator | Saturday 06 September 2025 00:47:28 +0000 (0:00:00.270) 0:05:17.647 **** 2025-09-06 00:53:14.208420 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.208426 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.208432 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.208438 | orchestrator | 2025-09-06 00:53:14.208444 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-06 00:53:14.208450 | orchestrator | Saturday 06 September 2025 00:47:28 +0000 (0:00:00.282) 0:05:17.929 **** 2025-09-06 00:53:14.208456 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208463 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208469 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208475 | orchestrator | 2025-09-06 00:53:14.208481 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] 
****************************** 2025-09-06 00:53:14.208487 | orchestrator | Saturday 06 September 2025 00:47:28 +0000 (0:00:00.438) 0:05:18.368 **** 2025-09-06 00:53:14.208493 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208499 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208506 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208512 | orchestrator | 2025-09-06 00:53:14.208518 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-06 00:53:14.208542 | orchestrator | Saturday 06 September 2025 00:47:29 +0000 (0:00:00.281) 0:05:18.650 **** 2025-09-06 00:53:14.208549 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208555 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208561 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208568 | orchestrator | 2025-09-06 00:53:14.208574 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-06 00:53:14.208580 | orchestrator | Saturday 06 September 2025 00:47:29 +0000 (0:00:00.271) 0:05:18.921 **** 2025-09-06 00:53:14.208586 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208593 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208599 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208605 | orchestrator | 2025-09-06 00:53:14.208611 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-06 00:53:14.208617 | orchestrator | Saturday 06 September 2025 00:47:29 +0000 (0:00:00.276) 0:05:19.198 **** 2025-09-06 00:53:14.208624 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208630 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208636 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208642 | orchestrator | 2025-09-06 00:53:14.208648 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-06 00:53:14.208654 | orchestrator | Saturday 06 September 2025 00:47:30 +0000 (0:00:00.280) 0:05:19.478 **** 2025-09-06 00:53:14.208661 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.208667 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.208673 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.208679 | orchestrator | 2025-09-06 00:53:14.208685 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-06 00:53:14.208691 | orchestrator | Saturday 06 September 2025 00:47:30 +0000 (0:00:00.434) 0:05:19.912 **** 2025-09-06 00:53:14.208702 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.208708 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.208718 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.208724 | orchestrator | 2025-09-06 00:53:14.208730 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-06 00:53:14.208736 | orchestrator | Saturday 06 September 2025 00:47:30 +0000 (0:00:00.238) 0:05:20.151 **** 2025-09-06 00:53:14.208762 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.208773 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.208784 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.208794 | orchestrator | 2025-09-06 00:53:14.208803 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-06 00:53:14.208814 | orchestrator | Saturday 06 September 2025 00:47:31 
+0000 (0:00:00.398) 0:05:20.549 **** 2025-09-06 00:53:14.208821 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-06 00:53:14.208827 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-06 00:53:14.208833 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-06 00:53:14.208839 | orchestrator | 2025-09-06 00:53:14.208845 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-06 00:53:14.208851 | orchestrator | Saturday 06 September 2025 00:47:31 +0000 (0:00:00.657) 0:05:21.207 **** 2025-09-06 00:53:14.208858 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.208864 | orchestrator | 2025-09-06 00:53:14.208870 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-06 00:53:14.208876 | orchestrator | Saturday 06 September 2025 00:47:32 +0000 (0:00:00.590) 0:05:21.798 **** 2025-09-06 00:53:14.208882 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.208888 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.208895 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.208901 | orchestrator | 2025-09-06 00:53:14.208907 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-06 00:53:14.208913 | orchestrator | Saturday 06 September 2025 00:47:33 +0000 (0:00:00.674) 0:05:22.472 **** 2025-09-06 00:53:14.208919 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.208925 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.208931 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.208937 | orchestrator | 2025-09-06 00:53:14.208943 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-06 00:53:14.208949 | orchestrator | Saturday 06 September 2025 00:47:33 +0000 (0:00:00.293) 0:05:22.766 **** 2025-09-06 00:53:14.208956 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-06 00:53:14.208962 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-06 00:53:14.208968 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-06 00:53:14.208974 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-06 00:53:14.208980 | orchestrator | 2025-09-06 00:53:14.208986 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-06 00:53:14.208992 | orchestrator | Saturday 06 September 2025 00:47:44 +0000 (0:00:11.518) 0:05:34.284 **** 2025-09-06 00:53:14.208999 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.209005 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.209011 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.209017 | orchestrator | 2025-09-06 00:53:14.209023 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-06 00:53:14.209029 | orchestrator | Saturday 06 September 2025 00:47:45 +0000 (0:00:00.391) 0:05:34.676 **** 2025-09-06 00:53:14.209035 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-06 00:53:14.209042 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-06 00:53:14.209048 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-06 00:53:14.209054 | orchestrator | ok: [testbed-node-0] => (item=None) 
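The 11-second "Create ceph mgr keyring(s) on a mon node" step asks the first monitor to mint one mgr key per manager host; "Get keys from monitors" and "Copy ceph key(s) if needed" then distribute them. A hedged sketch using the documented ceph auth capability profile for managers; group names, ownership, and paths are illustrative.

- name: Sketch of creating and installing a mgr keyring from the first mon
  hosts: mgrs
  become: true
  tasks:
    - name: Create ceph mgr keyring on a mon node
      ansible.builtin.command: >
        ceph auth get-or-create mgr.{{ ansible_facts['hostname'] }}
        mon 'allow profile mgr' osd 'allow *' mds 'allow *'
      register: mgr_key
      delegate_to: "{{ groups['mons'][0] }}"
      changed_when: false
    - name: Create mgr directory
      ansible.builtin.file:
        path: "/var/lib/ceph/mgr/ceph-{{ ansible_facts['hostname'] }}"
        state: directory
        owner: "167"
        group: "167"
        mode: "0755"
    - name: Install the keyring on the mgr host
      ansible.builtin.copy:
        content: "{{ mgr_key.stdout }}\n"
        dest: "/var/lib/ceph/mgr/ceph-{{ ansible_facts['hostname'] }}/keyring"
        owner: "167"
        group: "167"
        mode: "0600"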
2025-09-06 00:53:14.209060 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.209071 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.209077 | orchestrator | 2025-09-06 00:53:14.209083 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-06 00:53:14.209090 | orchestrator | Saturday 06 September 2025 00:47:47 +0000 (0:00:02.195) 0:05:36.871 **** 2025-09-06 00:53:14.209116 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-06 00:53:14.209124 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-06 00:53:14.209131 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-06 00:53:14.209137 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-06 00:53:14.209143 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-06 00:53:14.209149 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-06 00:53:14.209155 | orchestrator | 2025-09-06 00:53:14.209162 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-06 00:53:14.209168 | orchestrator | Saturday 06 September 2025 00:47:48 +0000 (0:00:01.211) 0:05:38.083 **** 2025-09-06 00:53:14.209174 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.209180 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.209186 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.209193 | orchestrator | 2025-09-06 00:53:14.209199 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-06 00:53:14.209205 | orchestrator | Saturday 06 September 2025 00:47:49 +0000 (0:00:00.631) 0:05:38.714 **** 2025-09-06 00:53:14.209211 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.209217 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.209223 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.209229 | orchestrator | 2025-09-06 00:53:14.209236 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-06 00:53:14.209242 | orchestrator | Saturday 06 September 2025 00:47:49 +0000 (0:00:00.640) 0:05:39.354 **** 2025-09-06 00:53:14.209248 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.209254 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.209260 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.209266 | orchestrator | 2025-09-06 00:53:14.209273 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-06 00:53:14.209283 | orchestrator | Saturday 06 September 2025 00:47:50 +0000 (0:00:00.346) 0:05:39.701 **** 2025-09-06 00:53:14.209289 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.209295 | orchestrator | 2025-09-06 00:53:14.209301 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-06 00:53:14.209308 | orchestrator | Saturday 06 September 2025 00:47:50 +0000 (0:00:00.520) 0:05:40.221 **** 2025-09-06 00:53:14.209314 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.209320 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.209326 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.209332 | orchestrator | 2025-09-06 00:53:14.209338 | orchestrator | TASK [ceph-mgr : Add 
ceph-mgr systemd service overrides] *********************** 2025-09-06 00:53:14.209344 | orchestrator | Saturday 06 September 2025 00:47:51 +0000 (0:00:00.575) 0:05:40.797 **** 2025-09-06 00:53:14.209350 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.209357 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.209363 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.209369 | orchestrator | 2025-09-06 00:53:14.209375 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-06 00:53:14.209381 | orchestrator | Saturday 06 September 2025 00:47:51 +0000 (0:00:00.339) 0:05:41.136 **** 2025-09-06 00:53:14.209387 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.209393 | orchestrator | 2025-09-06 00:53:14.209400 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-06 00:53:14.209410 | orchestrator | Saturday 06 September 2025 00:47:52 +0000 (0:00:00.520) 0:05:41.656 **** 2025-09-06 00:53:14.209416 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.209423 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.209429 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.209435 | orchestrator | 2025-09-06 00:53:14.209441 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-06 00:53:14.209447 | orchestrator | Saturday 06 September 2025 00:47:53 +0000 (0:00:01.541) 0:05:43.198 **** 2025-09-06 00:53:14.209453 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.209459 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.209465 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.209471 | orchestrator | 2025-09-06 00:53:14.209478 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-06 00:53:14.209484 | orchestrator | Saturday 06 September 2025 00:47:54 +0000 (0:00:01.196) 0:05:44.394 **** 2025-09-06 00:53:14.209490 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.209496 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.209502 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.209508 | orchestrator | 2025-09-06 00:53:14.209514 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-06 00:53:14.209520 | orchestrator | Saturday 06 September 2025 00:47:56 +0000 (0:00:01.719) 0:05:46.114 **** 2025-09-06 00:53:14.209527 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.209533 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.209539 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.209545 | orchestrator | 2025-09-06 00:53:14.209551 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-06 00:53:14.209557 | orchestrator | Saturday 06 September 2025 00:47:58 +0000 (0:00:02.192) 0:05:48.307 **** 2025-09-06 00:53:14.209563 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.209569 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.209575 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-06 00:53:14.209582 | orchestrator | 2025-09-06 00:53:14.209588 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-06 00:53:14.209594 | 
orchestrator | Saturday 06 September 2025 00:47:59 +0000 (0:00:00.698) 0:05:49.005 **** 2025-09-06 00:53:14.209600 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-06 00:53:14.209606 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-06 00:53:14.209630 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-06 00:53:14.209637 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-06 00:53:14.209643 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-06 00:53:14.209649 | orchestrator | 2025-09-06 00:53:14.209656 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-06 00:53:14.209662 | orchestrator | Saturday 06 September 2025 00:48:24 +0000 (0:00:24.479) 0:06:13.485 **** 2025-09-06 00:53:14.209668 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-06 00:53:14.209674 | orchestrator | 2025-09-06 00:53:14.209680 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-06 00:53:14.209687 | orchestrator | Saturday 06 September 2025 00:48:25 +0000 (0:00:01.254) 0:06:14.739 **** 2025-09-06 00:53:14.209693 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.209699 | orchestrator | 2025-09-06 00:53:14.209705 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-06 00:53:14.209711 | orchestrator | Saturday 06 September 2025 00:48:25 +0000 (0:00:00.359) 0:06:15.099 **** 2025-09-06 00:53:14.209718 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.209724 | orchestrator | 2025-09-06 00:53:14.209730 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-06 00:53:14.209759 | orchestrator | Saturday 06 September 2025 00:48:25 +0000 (0:00:00.150) 0:06:15.250 **** 2025-09-06 00:53:14.209767 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-06 00:53:14.209773 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-06 00:53:14.209783 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-06 00:53:14.209789 | orchestrator | 2025-09-06 00:53:14.209795 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-09-06 00:53:14.209801 | orchestrator | Saturday 06 September 2025 00:48:33 +0000 (0:00:07.402) 0:06:22.652 **** 2025-09-06 00:53:14.209808 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-06 00:53:14.209814 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-06 00:53:14.209820 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-06 00:53:14.209826 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-06 00:53:14.209832 | orchestrator | 2025-09-06 00:53:14.209838 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-06 00:53:14.209845 | orchestrator | Saturday 06 September 2025 00:48:38 +0000 (0:00:05.138) 0:06:27.790 **** 2025-09-06 00:53:14.209851 | orchestrator | 
changed: [testbed-node-0] 2025-09-06 00:53:14.209857 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.209863 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.209869 | orchestrator | 2025-09-06 00:53:14.209875 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-06 00:53:14.209882 | orchestrator | Saturday 06 September 2025 00:48:39 +0000 (0:00:00.705) 0:06:28.496 **** 2025-09-06 00:53:14.209888 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.209894 | orchestrator | 2025-09-06 00:53:14.209900 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-06 00:53:14.209906 | orchestrator | Saturday 06 September 2025 00:48:39 +0000 (0:00:00.510) 0:06:29.007 **** 2025-09-06 00:53:14.209913 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.209919 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.209925 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.209931 | orchestrator | 2025-09-06 00:53:14.209937 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-06 00:53:14.209944 | orchestrator | Saturday 06 September 2025 00:48:40 +0000 (0:00:00.571) 0:06:29.579 **** 2025-09-06 00:53:14.209950 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.209956 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.209962 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.209968 | orchestrator | 2025-09-06 00:53:14.209974 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-06 00:53:14.209980 | orchestrator | Saturday 06 September 2025 00:48:41 +0000 (0:00:01.177) 0:06:30.756 **** 2025-09-06 00:53:14.209987 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-06 00:53:14.209993 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-06 00:53:14.209999 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-06 00:53:14.210005 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.210011 | orchestrator | 2025-09-06 00:53:14.210044 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-06 00:53:14.210051 | orchestrator | Saturday 06 September 2025 00:48:41 +0000 (0:00:00.603) 0:06:31.360 **** 2025-09-06 00:53:14.210057 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.210064 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.210070 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.210076 | orchestrator | 2025-09-06 00:53:14.210082 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-06 00:53:14.210093 | orchestrator | 2025-09-06 00:53:14.210099 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-06 00:53:14.210106 | orchestrator | Saturday 06 September 2025 00:48:42 +0000 (0:00:00.571) 0:06:31.931 **** 2025-09-06 00:53:14.210112 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.210118 | orchestrator | 2025-09-06 00:53:14.210124 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-06 00:53:14.210131 | orchestrator | 
Saturday 06 September 2025 00:48:43 +0000 (0:00:00.818) 0:06:32.750 **** 2025-09-06 00:53:14.210159 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.210167 | orchestrator | 2025-09-06 00:53:14.210173 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-06 00:53:14.210179 | orchestrator | Saturday 06 September 2025 00:48:43 +0000 (0:00:00.517) 0:06:33.267 **** 2025-09-06 00:53:14.210185 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.210191 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.210198 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.210204 | orchestrator | 2025-09-06 00:53:14.210210 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-06 00:53:14.210216 | orchestrator | Saturday 06 September 2025 00:48:44 +0000 (0:00:00.553) 0:06:33.821 **** 2025-09-06 00:53:14.210222 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.210228 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.210234 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.210240 | orchestrator | 2025-09-06 00:53:14.210246 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-06 00:53:14.210253 | orchestrator | Saturday 06 September 2025 00:48:45 +0000 (0:00:00.673) 0:06:34.495 **** 2025-09-06 00:53:14.210259 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.210265 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.210271 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.210277 | orchestrator | 2025-09-06 00:53:14.210283 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-06 00:53:14.210289 | orchestrator | Saturday 06 September 2025 00:48:45 +0000 (0:00:00.712) 0:06:35.208 **** 2025-09-06 00:53:14.210295 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.210301 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.210307 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.210314 | orchestrator | 2025-09-06 00:53:14.210320 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-06 00:53:14.210329 | orchestrator | Saturday 06 September 2025 00:48:46 +0000 (0:00:00.668) 0:06:35.876 **** 2025-09-06 00:53:14.210336 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.210342 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.210349 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.210355 | orchestrator | 2025-09-06 00:53:14.210361 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-06 00:53:14.210367 | orchestrator | Saturday 06 September 2025 00:48:47 +0000 (0:00:00.648) 0:06:36.525 **** 2025-09-06 00:53:14.210373 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.210379 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.210385 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.210391 | orchestrator | 2025-09-06 00:53:14.210398 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-06 00:53:14.210404 | orchestrator | Saturday 06 September 2025 00:48:47 +0000 (0:00:00.312) 0:06:36.838 **** 2025-09-06 00:53:14.210410 | orchestrator | skipping: [testbed-node-3] 2025-09-06 
00:53:14.210416 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.210422 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.210428 | orchestrator | 2025-09-06 00:53:14.210434 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-06 00:53:14.210440 | orchestrator | Saturday 06 September 2025 00:48:47 +0000 (0:00:00.370) 0:06:37.208 **** 2025-09-06 00:53:14.210451 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.210457 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.210463 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.210469 | orchestrator | 2025-09-06 00:53:14.210475 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-06 00:53:14.210482 | orchestrator | Saturday 06 September 2025 00:48:48 +0000 (0:00:00.672) 0:06:37.881 **** 2025-09-06 00:53:14.210488 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.210494 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.210500 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.210506 | orchestrator | 2025-09-06 00:53:14.210512 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-06 00:53:14.210518 | orchestrator | Saturday 06 September 2025 00:48:49 +0000 (0:00:01.067) 0:06:38.948 **** 2025-09-06 00:53:14.210524 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.210531 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.210537 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.210543 | orchestrator | 2025-09-06 00:53:14.210549 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-06 00:53:14.210555 | orchestrator | Saturday 06 September 2025 00:48:49 +0000 (0:00:00.311) 0:06:39.259 **** 2025-09-06 00:53:14.210561 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.210567 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.210574 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.210580 | orchestrator | 2025-09-06 00:53:14.210586 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-06 00:53:14.210592 | orchestrator | Saturday 06 September 2025 00:48:50 +0000 (0:00:00.326) 0:06:39.586 **** 2025-09-06 00:53:14.210598 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.210605 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.210611 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.210617 | orchestrator | 2025-09-06 00:53:14.210623 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-06 00:53:14.210629 | orchestrator | Saturday 06 September 2025 00:48:50 +0000 (0:00:00.346) 0:06:39.933 **** 2025-09-06 00:53:14.210635 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.210641 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.210647 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.210653 | orchestrator | 2025-09-06 00:53:14.210659 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-06 00:53:14.210666 | orchestrator | Saturday 06 September 2025 00:48:51 +0000 (0:00:00.611) 0:06:40.545 **** 2025-09-06 00:53:14.210672 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.210678 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.210684 | orchestrator | ok: [testbed-node-5] 2025-09-06 
00:53:14.210690 | orchestrator | 2025-09-06 00:53:14.210696 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-06 00:53:14.210703 | orchestrator | Saturday 06 September 2025 00:48:51 +0000 (0:00:00.351) 0:06:40.896 **** 2025-09-06 00:53:14.210709 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.210715 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.210721 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.210727 | orchestrator | 2025-09-06 00:53:14.210737 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-06 00:53:14.210781 | orchestrator | Saturday 06 September 2025 00:48:51 +0000 (0:00:00.314) 0:06:41.210 **** 2025-09-06 00:53:14.210788 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.210794 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.210800 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.210806 | orchestrator | 2025-09-06 00:53:14.210812 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-06 00:53:14.210818 | orchestrator | Saturday 06 September 2025 00:48:52 +0000 (0:00:00.317) 0:06:41.528 **** 2025-09-06 00:53:14.210825 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.210835 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.210841 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.210847 | orchestrator | 2025-09-06 00:53:14.210853 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-06 00:53:14.210860 | orchestrator | Saturday 06 September 2025 00:48:52 +0000 (0:00:00.564) 0:06:42.092 **** 2025-09-06 00:53:14.210866 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.210872 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.210878 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.210884 | orchestrator | 2025-09-06 00:53:14.210890 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-06 00:53:14.210897 | orchestrator | Saturday 06 September 2025 00:48:53 +0000 (0:00:00.333) 0:06:42.426 **** 2025-09-06 00:53:14.210903 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.210909 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.210915 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.210921 | orchestrator | 2025-09-06 00:53:14.210927 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-09-06 00:53:14.210933 | orchestrator | Saturday 06 September 2025 00:48:53 +0000 (0:00:00.543) 0:06:42.970 **** 2025-09-06 00:53:14.210943 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.210949 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.210955 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.210961 | orchestrator | 2025-09-06 00:53:14.210967 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-09-06 00:53:14.210974 | orchestrator | Saturday 06 September 2025 00:48:54 +0000 (0:00:00.627) 0:06:43.597 **** 2025-09-06 00:53:14.210980 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-06 00:53:14.210986 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-06 00:53:14.210992 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-06 00:53:14.210998 | orchestrator | 2025-09-06 00:53:14.211005 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-09-06 00:53:14.211011 | orchestrator | Saturday 06 September 2025 00:48:54 +0000 (0:00:00.683) 0:06:44.280 **** 2025-09-06 00:53:14.211017 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.211023 | orchestrator | 2025-09-06 00:53:14.211029 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-09-06 00:53:14.211035 | orchestrator | Saturday 06 September 2025 00:48:55 +0000 (0:00:00.537) 0:06:44.818 **** 2025-09-06 00:53:14.211041 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.211047 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.211054 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.211060 | orchestrator | 2025-09-06 00:53:14.211066 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-09-06 00:53:14.211072 | orchestrator | Saturday 06 September 2025 00:48:55 +0000 (0:00:00.298) 0:06:45.117 **** 2025-09-06 00:53:14.211078 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.211084 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.211090 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.211096 | orchestrator | 2025-09-06 00:53:14.211102 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-09-06 00:53:14.211109 | orchestrator | Saturday 06 September 2025 00:48:56 +0000 (0:00:00.574) 0:06:45.691 **** 2025-09-06 00:53:14.211115 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.211121 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.211127 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.211133 | orchestrator | 2025-09-06 00:53:14.211139 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-09-06 00:53:14.211145 | orchestrator | Saturday 06 September 2025 00:48:56 +0000 (0:00:00.612) 0:06:46.304 **** 2025-09-06 00:53:14.211152 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.211163 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.211169 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.211175 | orchestrator | 2025-09-06 00:53:14.211182 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-09-06 00:53:14.211188 | orchestrator | Saturday 06 September 2025 00:48:57 +0000 (0:00:00.334) 0:06:46.638 **** 2025-09-06 00:53:14.211194 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-06 00:53:14.211200 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-06 00:53:14.211206 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-06 00:53:14.211212 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-06 00:53:14.211218 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-06 00:53:14.211225 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-06 
00:53:14.211231 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-06 00:53:14.211237 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-06 00:53:14.211247 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-06 00:53:14.211254 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-06 00:53:14.211260 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-06 00:53:14.211266 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-06 00:53:14.211272 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-06 00:53:14.211278 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-06 00:53:14.211284 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-06 00:53:14.211290 | orchestrator | 2025-09-06 00:53:14.211296 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-09-06 00:53:14.211302 | orchestrator | Saturday 06 September 2025 00:48:59 +0000 (0:00:02.137) 0:06:48.776 **** 2025-09-06 00:53:14.211308 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.211315 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.211320 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.211326 | orchestrator | 2025-09-06 00:53:14.211331 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-06 00:53:14.211339 | orchestrator | Saturday 06 September 2025 00:48:59 +0000 (0:00:00.612) 0:06:49.389 **** 2025-09-06 00:53:14.211348 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.211358 | orchestrator | 2025-09-06 00:53:14.211372 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-06 00:53:14.211388 | orchestrator | Saturday 06 September 2025 00:49:00 +0000 (0:00:00.622) 0:06:50.011 **** 2025-09-06 00:53:14.211396 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-06 00:53:14.211405 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-06 00:53:14.211414 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-06 00:53:14.211422 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-09-06 00:53:14.211430 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-06 00:53:14.211440 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-06 00:53:14.211449 | orchestrator | 2025-09-06 00:53:14.211457 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-06 00:53:14.211465 | orchestrator | Saturday 06 September 2025 00:49:01 +0000 (0:00:00.988) 0:06:50.999 **** 2025-09-06 00:53:14.211480 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.211490 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-06 00:53:14.211499 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-06 00:53:14.211508 | orchestrator | 
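The "Apply operating system tuning" task above reports the kernel parameters set on the OSD nodes. As a minimal illustration only (the role itself applies these through Ansible's sysctl module, not a script), the following Python sketch applies the same values shown in the log:

import subprocess

# Kernel settings reported by the ceph-osd "Apply operating system tuning" task above.
OSD_SYSCTL_SETTINGS = {
    "fs.aio-max-nr": "1048576",
    "fs.file-max": "26234859",
    "vm.zone_reclaim_mode": "0",
    "vm.swappiness": "10",
    "vm.min_free_kbytes": "67584",
}

def apply_settings(settings):
    for key, value in settings.items():
        # 'sysctl -w' only changes the running kernel; persisting the values
        # additionally requires an /etc/sysctl.d/ drop-in, which the role handles.
        subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)

if __name__ == "__main__":
    apply_settings(OSD_SYSCTL_SETTINGS)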
2025-09-06 00:53:14.211516 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-06 00:53:14.211526 | orchestrator | Saturday 06 September 2025 00:49:04 +0000 (0:00:02.508) 0:06:53.508 **** 2025-09-06 00:53:14.211535 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-06 00:53:14.211541 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-06 00:53:14.211546 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.211552 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-06 00:53:14.211557 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-06 00:53:14.211562 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.211568 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-06 00:53:14.211573 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-06 00:53:14.211578 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.211584 | orchestrator | 2025-09-06 00:53:14.211589 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-06 00:53:14.211594 | orchestrator | Saturday 06 September 2025 00:49:05 +0000 (0:00:01.442) 0:06:54.951 **** 2025-09-06 00:53:14.211600 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-06 00:53:14.211605 | orchestrator | 2025-09-06 00:53:14.211611 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-06 00:53:14.211616 | orchestrator | Saturday 06 September 2025 00:49:07 +0000 (0:00:02.068) 0:06:57.020 **** 2025-09-06 00:53:14.211621 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.211627 | orchestrator | 2025-09-06 00:53:14.211632 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-06 00:53:14.211637 | orchestrator | Saturday 06 September 2025 00:49:08 +0000 (0:00:00.539) 0:06:57.559 **** 2025-09-06 00:53:14.211643 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f', 'data_vg': 'ceph-6f5e0d3a-48d2-5dc7-b4c5-38e7a8a8ed6f'}) 2025-09-06 00:53:14.211649 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567', 'data_vg': 'ceph-6c2b7b83-cfe0-5d78-88e9-40d3d3c4d567'}) 2025-09-06 00:53:14.211655 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e9969153-fa79-5368-8c16-a33775dfe5f6', 'data_vg': 'ceph-e9969153-fa79-5368-8c16-a33775dfe5f6'}) 2025-09-06 00:53:14.211660 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d801673f-a74f-56ad-ad0d-e97588ff4709', 'data_vg': 'ceph-d801673f-a74f-56ad-ad0d-e97588ff4709'}) 2025-09-06 00:53:14.211670 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e6b4ea58-4fde-56e5-979f-346e927a82c3', 'data_vg': 'ceph-e6b4ea58-4fde-56e5-979f-346e927a82c3'}) 2025-09-06 00:53:14.211676 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-473d4611-c66c-5516-9b6d-fd0b18ba2fe0', 'data_vg': 'ceph-473d4611-c66c-5516-9b6d-fd0b18ba2fe0'}) 2025-09-06 00:53:14.211682 | orchestrator | 2025-09-06 00:53:14.211687 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-06 00:53:14.211693 | orchestrator | Saturday 06 September 2025 00:49:51 +0000 (0:00:42.918) 0:07:40.477 **** 2025-09-06 
00:53:14.211698 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.211703 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.211709 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.211714 | orchestrator | 2025-09-06 00:53:14.211720 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-06 00:53:14.211725 | orchestrator | Saturday 06 September 2025 00:49:51 +0000 (0:00:00.328) 0:07:40.806 **** 2025-09-06 00:53:14.211735 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.211756 | orchestrator | 2025-09-06 00:53:14.211762 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-06 00:53:14.211768 | orchestrator | Saturday 06 September 2025 00:49:51 +0000 (0:00:00.530) 0:07:41.336 **** 2025-09-06 00:53:14.211773 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.211778 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.211784 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.211789 | orchestrator | 2025-09-06 00:53:14.211794 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-06 00:53:14.211800 | orchestrator | Saturday 06 September 2025 00:49:52 +0000 (0:00:00.933) 0:07:42.270 **** 2025-09-06 00:53:14.211805 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.211814 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.211820 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.211825 | orchestrator | 2025-09-06 00:53:14.211831 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-06 00:53:14.211836 | orchestrator | Saturday 06 September 2025 00:49:55 +0000 (0:00:02.509) 0:07:44.780 **** 2025-09-06 00:53:14.211841 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.211847 | orchestrator | 2025-09-06 00:53:14.211852 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-09-06 00:53:14.211858 | orchestrator | Saturday 06 September 2025 00:49:55 +0000 (0:00:00.488) 0:07:45.269 **** 2025-09-06 00:53:14.211863 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.211868 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.211874 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.211879 | orchestrator | 2025-09-06 00:53:14.211884 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-06 00:53:14.211890 | orchestrator | Saturday 06 September 2025 00:49:57 +0000 (0:00:01.474) 0:07:46.744 **** 2025-09-06 00:53:14.211895 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.211900 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.211906 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.211911 | orchestrator | 2025-09-06 00:53:14.211916 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-06 00:53:14.211922 | orchestrator | Saturday 06 September 2025 00:49:58 +0000 (0:00:01.156) 0:07:47.900 **** 2025-09-06 00:53:14.211927 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.211932 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.211938 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.211943 | 
orchestrator | 2025-09-06 00:53:14.211948 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-06 00:53:14.211954 | orchestrator | Saturday 06 September 2025 00:50:00 +0000 (0:00:01.697) 0:07:49.597 **** 2025-09-06 00:53:14.211959 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.211964 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.211969 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.211975 | orchestrator | 2025-09-06 00:53:14.211980 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-09-06 00:53:14.211985 | orchestrator | Saturday 06 September 2025 00:50:00 +0000 (0:00:00.343) 0:07:49.940 **** 2025-09-06 00:53:14.211991 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.211996 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.212001 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.212007 | orchestrator | 2025-09-06 00:53:14.212012 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-06 00:53:14.212017 | orchestrator | Saturday 06 September 2025 00:50:01 +0000 (0:00:00.582) 0:07:50.523 **** 2025-09-06 00:53:14.212023 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-09-06 00:53:14.212028 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-09-06 00:53:14.212037 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-09-06 00:53:14.212042 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-06 00:53:14.212047 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-09-06 00:53:14.212053 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-09-06 00:53:14.212058 | orchestrator | 2025-09-06 00:53:14.212063 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-06 00:53:14.212069 | orchestrator | Saturday 06 September 2025 00:50:02 +0000 (0:00:01.039) 0:07:51.562 **** 2025-09-06 00:53:14.212074 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-06 00:53:14.212079 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-06 00:53:14.212084 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-06 00:53:14.212090 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-06 00:53:14.212095 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-06 00:53:14.212100 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-06 00:53:14.212106 | orchestrator | 2025-09-06 00:53:14.212111 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-09-06 00:53:14.212116 | orchestrator | Saturday 06 September 2025 00:50:04 +0000 (0:00:02.179) 0:07:53.742 **** 2025-09-06 00:53:14.212122 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-06 00:53:14.212127 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-06 00:53:14.212135 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-06 00:53:14.212141 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-06 00:53:14.212146 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-06 00:53:14.212152 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-06 00:53:14.212157 | orchestrator | 2025-09-06 00:53:14.212162 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-06 00:53:14.212168 | orchestrator | Saturday 06 September 2025 00:50:08 +0000 (0:00:04.270) 0:07:58.012 **** 2025-09-06 
00:53:14.212173 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212178 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.212184 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-06 00:53:14.212189 | orchestrator | 2025-09-06 00:53:14.212194 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-06 00:53:14.212200 | orchestrator | Saturday 06 September 2025 00:50:11 +0000 (0:00:02.748) 0:08:00.761 **** 2025-09-06 00:53:14.212205 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212211 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.212216 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-09-06 00:53:14.212221 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-06 00:53:14.212227 | orchestrator | 2025-09-06 00:53:14.212232 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-06 00:53:14.212238 | orchestrator | Saturday 06 September 2025 00:50:23 +0000 (0:00:12.553) 0:08:13.314 **** 2025-09-06 00:53:14.212243 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212248 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.212253 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.212259 | orchestrator | 2025-09-06 00:53:14.212267 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-06 00:53:14.212273 | orchestrator | Saturday 06 September 2025 00:50:24 +0000 (0:00:01.082) 0:08:14.396 **** 2025-09-06 00:53:14.212278 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212284 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.212289 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.212294 | orchestrator | 2025-09-06 00:53:14.212300 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-06 00:53:14.212305 | orchestrator | Saturday 06 September 2025 00:50:25 +0000 (0:00:00.359) 0:08:14.756 **** 2025-09-06 00:53:14.212310 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.212320 | orchestrator | 2025-09-06 00:53:14.212326 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-06 00:53:14.212331 | orchestrator | Saturday 06 September 2025 00:50:25 +0000 (0:00:00.556) 0:08:15.312 **** 2025-09-06 00:53:14.212337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.212342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.212347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.212353 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212358 | orchestrator | 2025-09-06 00:53:14.212363 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-06 00:53:14.212369 | orchestrator | Saturday 06 September 2025 00:50:26 +0000 (0:00:00.669) 0:08:15.982 **** 2025-09-06 00:53:14.212374 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212379 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.212385 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.212390 | orchestrator | 2025-09-06 
00:53:14.212395 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-06 00:53:14.212401 | orchestrator | Saturday 06 September 2025 00:50:27 +0000 (0:00:00.597) 0:08:16.579 **** 2025-09-06 00:53:14.212406 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212411 | orchestrator | 2025-09-06 00:53:14.212417 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-06 00:53:14.212422 | orchestrator | Saturday 06 September 2025 00:50:27 +0000 (0:00:00.255) 0:08:16.835 **** 2025-09-06 00:53:14.212428 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212433 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.212438 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.212444 | orchestrator | 2025-09-06 00:53:14.212449 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-06 00:53:14.212454 | orchestrator | Saturday 06 September 2025 00:50:27 +0000 (0:00:00.336) 0:08:17.172 **** 2025-09-06 00:53:14.212460 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212465 | orchestrator | 2025-09-06 00:53:14.212470 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-06 00:53:14.212476 | orchestrator | Saturday 06 September 2025 00:50:28 +0000 (0:00:00.241) 0:08:17.414 **** 2025-09-06 00:53:14.212481 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212487 | orchestrator | 2025-09-06 00:53:14.212492 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-06 00:53:14.212497 | orchestrator | Saturday 06 September 2025 00:50:28 +0000 (0:00:00.244) 0:08:17.659 **** 2025-09-06 00:53:14.212503 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212508 | orchestrator | 2025-09-06 00:53:14.212514 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-06 00:53:14.212519 | orchestrator | Saturday 06 September 2025 00:50:28 +0000 (0:00:00.126) 0:08:17.786 **** 2025-09-06 00:53:14.212524 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212530 | orchestrator | 2025-09-06 00:53:14.212535 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-06 00:53:14.212540 | orchestrator | Saturday 06 September 2025 00:50:28 +0000 (0:00:00.222) 0:08:18.008 **** 2025-09-06 00:53:14.212546 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212551 | orchestrator | 2025-09-06 00:53:14.212556 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-06 00:53:14.212562 | orchestrator | Saturday 06 September 2025 00:50:28 +0000 (0:00:00.220) 0:08:18.228 **** 2025-09-06 00:53:14.212570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.212576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.212581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.212587 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212592 | orchestrator | 2025-09-06 00:53:14.212597 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-06 00:53:14.212609 | orchestrator | Saturday 06 September 2025 00:50:29 +0000 (0:00:00.712) 0:08:18.941 **** 2025-09-06 00:53:14.212614 | 
orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212620 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.212625 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.212630 | orchestrator | 2025-09-06 00:53:14.212636 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-06 00:53:14.212641 | orchestrator | Saturday 06 September 2025 00:50:30 +0000 (0:00:00.598) 0:08:19.539 **** 2025-09-06 00:53:14.212647 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212652 | orchestrator | 2025-09-06 00:53:14.212657 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-06 00:53:14.212663 | orchestrator | Saturday 06 September 2025 00:50:30 +0000 (0:00:00.225) 0:08:19.765 **** 2025-09-06 00:53:14.212668 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212674 | orchestrator | 2025-09-06 00:53:14.212679 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-06 00:53:14.212684 | orchestrator | 2025-09-06 00:53:14.212690 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-06 00:53:14.212695 | orchestrator | Saturday 06 September 2025 00:50:31 +0000 (0:00:00.649) 0:08:20.414 **** 2025-09-06 00:53:14.212704 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.212711 | orchestrator | 2025-09-06 00:53:14.212717 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-06 00:53:14.212722 | orchestrator | Saturday 06 September 2025 00:50:32 +0000 (0:00:01.283) 0:08:21.698 **** 2025-09-06 00:53:14.212728 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.212733 | orchestrator | 2025-09-06 00:53:14.212753 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-06 00:53:14.212759 | orchestrator | Saturday 06 September 2025 00:50:33 +0000 (0:00:01.313) 0:08:23.011 **** 2025-09-06 00:53:14.212765 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212770 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.212776 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.212781 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.212786 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.212792 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.212797 | orchestrator | 2025-09-06 00:53:14.212802 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-06 00:53:14.212808 | orchestrator | Saturday 06 September 2025 00:50:34 +0000 (0:00:01.305) 0:08:24.317 **** 2025-09-06 00:53:14.212813 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.212819 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.212824 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.212829 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.212835 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.212840 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.212845 | orchestrator | 2025-09-06 00:53:14.212851 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2025-09-06 00:53:14.212856 | orchestrator | Saturday 06 September 2025 00:50:35 +0000 (0:00:00.734) 0:08:25.051 **** 2025-09-06 00:53:14.212862 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.212867 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.212872 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.212878 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.212883 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.212888 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.212894 | orchestrator | 2025-09-06 00:53:14.212899 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-06 00:53:14.212909 | orchestrator | Saturday 06 September 2025 00:50:36 +0000 (0:00:00.902) 0:08:25.954 **** 2025-09-06 00:53:14.212915 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.212920 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.212925 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.212930 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.212936 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.212941 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.212946 | orchestrator | 2025-09-06 00:53:14.212952 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-06 00:53:14.212957 | orchestrator | Saturday 06 September 2025 00:50:37 +0000 (0:00:00.740) 0:08:26.694 **** 2025-09-06 00:53:14.212962 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.212968 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.212973 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.212979 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.212984 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.212989 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.212994 | orchestrator | 2025-09-06 00:53:14.213000 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-06 00:53:14.213005 | orchestrator | Saturday 06 September 2025 00:50:38 +0000 (0:00:01.315) 0:08:28.009 **** 2025-09-06 00:53:14.213011 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.213016 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.213022 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.213027 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.213032 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.213037 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.213043 | orchestrator | 2025-09-06 00:53:14.213048 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-06 00:53:14.213053 | orchestrator | Saturday 06 September 2025 00:50:39 +0000 (0:00:00.601) 0:08:28.611 **** 2025-09-06 00:53:14.213062 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.213067 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.213072 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.213078 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.213083 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.213089 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.213094 | orchestrator | 2025-09-06 00:53:14.213099 | orchestrator | TASK [ceph-handler : Check for a ceph-crash 
container] ************************* 2025-09-06 00:53:14.213105 | orchestrator | Saturday 06 September 2025 00:50:40 +0000 (0:00:00.852) 0:08:29.464 **** 2025-09-06 00:53:14.213110 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.213115 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.213121 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.213126 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.213131 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.213137 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.213142 | orchestrator | 2025-09-06 00:53:14.213148 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-06 00:53:14.213153 | orchestrator | Saturday 06 September 2025 00:50:41 +0000 (0:00:01.038) 0:08:30.502 **** 2025-09-06 00:53:14.213159 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.213164 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.213169 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.213174 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.213180 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.213185 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.213190 | orchestrator | 2025-09-06 00:53:14.213196 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-06 00:53:14.213201 | orchestrator | Saturday 06 September 2025 00:50:42 +0000 (0:00:01.358) 0:08:31.860 **** 2025-09-06 00:53:14.213206 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.213212 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.213217 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.213226 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.213235 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.213240 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.213246 | orchestrator | 2025-09-06 00:53:14.213251 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-06 00:53:14.213256 | orchestrator | Saturday 06 September 2025 00:50:43 +0000 (0:00:00.643) 0:08:32.504 **** 2025-09-06 00:53:14.213262 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.213267 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.213273 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.213278 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.213283 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.213288 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.213294 | orchestrator | 2025-09-06 00:53:14.213299 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-06 00:53:14.213305 | orchestrator | Saturday 06 September 2025 00:50:44 +0000 (0:00:00.951) 0:08:33.456 **** 2025-09-06 00:53:14.213310 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.213315 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.213321 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.213326 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.213331 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.213337 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.213342 | orchestrator | 2025-09-06 00:53:14.213347 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-06 00:53:14.213353 | orchestrator | Saturday 06 September 
2025 00:50:44 +0000 (0:00:00.634) 0:08:34.090 **** 2025-09-06 00:53:14.213358 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.213364 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.213369 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.213374 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.213379 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.213385 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.213390 | orchestrator | 2025-09-06 00:53:14.213395 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-06 00:53:14.213401 | orchestrator | Saturday 06 September 2025 00:50:45 +0000 (0:00:01.074) 0:08:35.165 **** 2025-09-06 00:53:14.213406 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.213412 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.213417 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.213422 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.213428 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.213433 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.213438 | orchestrator | 2025-09-06 00:53:14.213444 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-06 00:53:14.213449 | orchestrator | Saturday 06 September 2025 00:50:46 +0000 (0:00:00.654) 0:08:35.819 **** 2025-09-06 00:53:14.213454 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.213460 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.213465 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.213470 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.213476 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.213481 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.213486 | orchestrator | 2025-09-06 00:53:14.213492 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-06 00:53:14.213497 | orchestrator | Saturday 06 September 2025 00:50:47 +0000 (0:00:00.823) 0:08:36.642 **** 2025-09-06 00:53:14.213502 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.213508 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.213513 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.213518 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:53:14.213524 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:53:14.213529 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:53:14.213538 | orchestrator | 2025-09-06 00:53:14.213543 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-06 00:53:14.213549 | orchestrator | Saturday 06 September 2025 00:50:47 +0000 (0:00:00.581) 0:08:37.223 **** 2025-09-06 00:53:14.213554 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.213559 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.213565 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.213570 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.213575 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.213581 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.213586 | orchestrator | 2025-09-06 00:53:14.213592 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-06 00:53:14.213600 | orchestrator | Saturday 06 September 2025 00:50:48 +0000 (0:00:00.835) 0:08:38.059 
**** 2025-09-06 00:53:14.213606 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.213611 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.213617 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.213622 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.213627 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.213633 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.213638 | orchestrator | 2025-09-06 00:53:14.213643 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-06 00:53:14.213649 | orchestrator | Saturday 06 September 2025 00:50:49 +0000 (0:00:00.625) 0:08:38.684 **** 2025-09-06 00:53:14.213654 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.213660 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.213665 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.213670 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.213675 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.213681 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.213686 | orchestrator | 2025-09-06 00:53:14.213691 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-06 00:53:14.213697 | orchestrator | Saturday 06 September 2025 00:50:50 +0000 (0:00:01.343) 0:08:40.028 **** 2025-09-06 00:53:14.213702 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-06 00:53:14.213708 | orchestrator | 2025-09-06 00:53:14.213713 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-06 00:53:14.213718 | orchestrator | Saturday 06 September 2025 00:50:54 +0000 (0:00:04.133) 0:08:44.162 **** 2025-09-06 00:53:14.213724 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-06 00:53:14.213729 | orchestrator | 2025-09-06 00:53:14.213735 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-06 00:53:14.213831 | orchestrator | Saturday 06 September 2025 00:50:56 +0000 (0:00:02.110) 0:08:46.272 **** 2025-09-06 00:53:14.213846 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.213857 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.213862 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.213868 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.213873 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.213879 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.213884 | orchestrator | 2025-09-06 00:53:14.213889 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-06 00:53:14.213895 | orchestrator | Saturday 06 September 2025 00:50:58 +0000 (0:00:01.993) 0:08:48.266 **** 2025-09-06 00:53:14.213900 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.213905 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.213911 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.213916 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.213921 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.213926 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.213932 | orchestrator | 2025-09-06 00:53:14.213937 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-09-06 00:53:14.213942 | orchestrator | Saturday 06 September 2025 00:51:00 +0000 (0:00:01.329) 0:08:49.595 **** 2025-09-06 
00:53:14.213954 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.213960 | orchestrator | 2025-09-06 00:53:14.213965 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-06 00:53:14.213970 | orchestrator | Saturday 06 September 2025 00:51:01 +0000 (0:00:01.286) 0:08:50.882 **** 2025-09-06 00:53:14.213976 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.213981 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.213987 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.213992 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.213997 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.214002 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.214008 | orchestrator | 2025-09-06 00:53:14.214013 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-06 00:53:14.214040 | orchestrator | Saturday 06 September 2025 00:51:03 +0000 (0:00:01.765) 0:08:52.647 **** 2025-09-06 00:53:14.214045 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.214051 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.214056 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.214062 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.214067 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.214072 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.214078 | orchestrator | 2025-09-06 00:53:14.214083 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-06 00:53:14.214089 | orchestrator | Saturday 06 September 2025 00:51:07 +0000 (0:00:03.888) 0:08:56.536 **** 2025-09-06 00:53:14.214095 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:53:14.214100 | orchestrator | 2025-09-06 00:53:14.214104 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-06 00:53:14.214109 | orchestrator | Saturday 06 September 2025 00:51:08 +0000 (0:00:01.335) 0:08:57.871 **** 2025-09-06 00:53:14.214114 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214119 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.214124 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214128 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.214133 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.214138 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.214143 | orchestrator | 2025-09-06 00:53:14.214147 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-06 00:53:14.214152 | orchestrator | Saturday 06 September 2025 00:51:09 +0000 (0:00:00.827) 0:08:58.698 **** 2025-09-06 00:53:14.214157 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.214162 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.214167 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.214171 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:53:14.214176 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:53:14.214181 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:53:14.214186 | orchestrator | 2025-09-06 00:53:14.214191 | 
orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-06 00:53:14.214202 | orchestrator | Saturday 06 September 2025 00:51:11 +0000 (0:00:02.202) 0:09:00.901 **** 2025-09-06 00:53:14.214207 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214212 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.214217 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214222 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:53:14.214226 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:53:14.214231 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:53:14.214236 | orchestrator | 2025-09-06 00:53:14.214241 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-06 00:53:14.214246 | orchestrator | 2025-09-06 00:53:14.214250 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-06 00:53:14.214255 | orchestrator | Saturday 06 September 2025 00:51:12 +0000 (0:00:01.099) 0:09:02.001 **** 2025-09-06 00:53:14.214264 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.214269 | orchestrator | 2025-09-06 00:53:14.214274 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-06 00:53:14.214278 | orchestrator | Saturday 06 September 2025 00:51:13 +0000 (0:00:00.504) 0:09:02.505 **** 2025-09-06 00:53:14.214283 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.214288 | orchestrator | 2025-09-06 00:53:14.214293 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-06 00:53:14.214298 | orchestrator | Saturday 06 September 2025 00:51:13 +0000 (0:00:00.767) 0:09:03.273 **** 2025-09-06 00:53:14.214302 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.214307 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.214312 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.214317 | orchestrator | 2025-09-06 00:53:14.214325 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-06 00:53:14.214330 | orchestrator | Saturday 06 September 2025 00:51:14 +0000 (0:00:00.302) 0:09:03.575 **** 2025-09-06 00:53:14.214334 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214339 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.214344 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214349 | orchestrator | 2025-09-06 00:53:14.214353 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-06 00:53:14.214358 | orchestrator | Saturday 06 September 2025 00:51:14 +0000 (0:00:00.684) 0:09:04.260 **** 2025-09-06 00:53:14.214363 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214368 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.214373 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214377 | orchestrator | 2025-09-06 00:53:14.214382 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-06 00:53:14.214387 | orchestrator | Saturday 06 September 2025 00:51:15 +0000 (0:00:00.742) 0:09:05.002 **** 2025-09-06 00:53:14.214392 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214397 | orchestrator | ok: [testbed-node-4] 
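The "Check for a * container" tasks in this play probe each node for an already running Ceph daemon container before any handler or install decisions are made. As a rough illustration only (not the ceph-ansible source; the container runtime, container name pattern, and variable names are assumptions), such a check can be written as a non-failing command task whose result is registered for later use:

    - name: Check for a mds container
      ansible.builtin.command: "docker ps -q --filter name=ceph-mds-{{ ansible_facts['hostname'] }}"
      register: ceph_mds_container_stat
      changed_when: false
      failed_when: false

A task written this way always reports "ok" or "skipping" in the log above, never "failed", because failed_when is forced to false and only the registered output matters.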
2025-09-06 00:53:14.214402 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214406 | orchestrator | 2025-09-06 00:53:14.214411 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-06 00:53:14.214416 | orchestrator | Saturday 06 September 2025 00:51:16 +0000 (0:00:00.919) 0:09:05.922 **** 2025-09-06 00:53:14.214421 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.214426 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.214431 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.214435 | orchestrator | 2025-09-06 00:53:14.214440 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-06 00:53:14.214445 | orchestrator | Saturday 06 September 2025 00:51:16 +0000 (0:00:00.271) 0:09:06.193 **** 2025-09-06 00:53:14.214450 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.214455 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.214460 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.214464 | orchestrator | 2025-09-06 00:53:14.214469 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-06 00:53:14.214474 | orchestrator | Saturday 06 September 2025 00:51:17 +0000 (0:00:00.262) 0:09:06.456 **** 2025-09-06 00:53:14.214479 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.214483 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.214488 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.214493 | orchestrator | 2025-09-06 00:53:14.214498 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-06 00:53:14.214503 | orchestrator | Saturday 06 September 2025 00:51:17 +0000 (0:00:00.262) 0:09:06.719 **** 2025-09-06 00:53:14.214508 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214516 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.214521 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214525 | orchestrator | 2025-09-06 00:53:14.214530 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-06 00:53:14.214535 | orchestrator | Saturday 06 September 2025 00:51:18 +0000 (0:00:00.853) 0:09:07.573 **** 2025-09-06 00:53:14.214540 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214545 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.214549 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214554 | orchestrator | 2025-09-06 00:53:14.214559 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-06 00:53:14.214564 | orchestrator | Saturday 06 September 2025 00:51:18 +0000 (0:00:00.624) 0:09:08.197 **** 2025-09-06 00:53:14.214568 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.214573 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.214578 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.214583 | orchestrator | 2025-09-06 00:53:14.214587 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-06 00:53:14.214592 | orchestrator | Saturday 06 September 2025 00:51:19 +0000 (0:00:00.270) 0:09:08.468 **** 2025-09-06 00:53:14.214597 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.214602 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.214607 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.214611 | 
orchestrator | 2025-09-06 00:53:14.214616 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-06 00:53:14.214621 | orchestrator | Saturday 06 September 2025 00:51:19 +0000 (0:00:00.275) 0:09:08.744 **** 2025-09-06 00:53:14.214626 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214631 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.214638 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214643 | orchestrator | 2025-09-06 00:53:14.214648 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-06 00:53:14.214653 | orchestrator | Saturday 06 September 2025 00:51:19 +0000 (0:00:00.481) 0:09:09.225 **** 2025-09-06 00:53:14.214658 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214663 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.214667 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214672 | orchestrator | 2025-09-06 00:53:14.214677 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-06 00:53:14.214682 | orchestrator | Saturday 06 September 2025 00:51:20 +0000 (0:00:00.325) 0:09:09.551 **** 2025-09-06 00:53:14.214687 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214691 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.214696 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214701 | orchestrator | 2025-09-06 00:53:14.214706 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-06 00:53:14.214711 | orchestrator | Saturday 06 September 2025 00:51:20 +0000 (0:00:00.360) 0:09:09.912 **** 2025-09-06 00:53:14.214716 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.214720 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.214725 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.214730 | orchestrator | 2025-09-06 00:53:14.214735 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-06 00:53:14.214754 | orchestrator | Saturday 06 September 2025 00:51:20 +0000 (0:00:00.280) 0:09:10.192 **** 2025-09-06 00:53:14.214759 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.214764 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.214768 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.214773 | orchestrator | 2025-09-06 00:53:14.214778 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-06 00:53:14.214786 | orchestrator | Saturday 06 September 2025 00:51:21 +0000 (0:00:00.449) 0:09:10.641 **** 2025-09-06 00:53:14.214791 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.214796 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.214800 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.214809 | orchestrator | 2025-09-06 00:53:14.214814 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-06 00:53:14.214819 | orchestrator | Saturday 06 September 2025 00:51:21 +0000 (0:00:00.250) 0:09:10.891 **** 2025-09-06 00:53:14.214824 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214828 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.214833 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214838 | orchestrator | 2025-09-06 00:53:14.214843 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-09-06 00:53:14.214848 | orchestrator | Saturday 06 September 2025 00:51:21 +0000 (0:00:00.298) 0:09:11.190 **** 2025-09-06 00:53:14.214852 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.214857 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.214862 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.214867 | orchestrator | 2025-09-06 00:53:14.214872 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-06 00:53:14.214876 | orchestrator | Saturday 06 September 2025 00:51:22 +0000 (0:00:00.644) 0:09:11.834 **** 2025-09-06 00:53:14.214881 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.214886 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.214891 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-06 00:53:14.214896 | orchestrator | 2025-09-06 00:53:14.214901 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-06 00:53:14.214906 | orchestrator | Saturday 06 September 2025 00:51:22 +0000 (0:00:00.366) 0:09:12.201 **** 2025-09-06 00:53:14.214910 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-06 00:53:14.214915 | orchestrator | 2025-09-06 00:53:14.214920 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-06 00:53:14.214925 | orchestrator | Saturday 06 September 2025 00:51:24 +0000 (0:00:02.143) 0:09:14.344 **** 2025-09-06 00:53:14.214930 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-06 00:53:14.214937 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.214942 | orchestrator | 2025-09-06 00:53:14.214946 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-06 00:53:14.214951 | orchestrator | Saturday 06 September 2025 00:51:25 +0000 (0:00:00.254) 0:09:14.599 **** 2025-09-06 00:53:14.214957 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-06 00:53:14.214966 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-06 00:53:14.214971 | orchestrator | 2025-09-06 00:53:14.214975 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-06 00:53:14.214980 | orchestrator | Saturday 06 September 2025 00:51:33 +0000 (0:00:07.895) 0:09:22.494 **** 2025-09-06 00:53:14.214985 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-06 00:53:14.214990 | orchestrator | 2025-09-06 00:53:14.214995 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-06 00:53:14.214999 | orchestrator | Saturday 06 September 2025 00:51:36 +0000 (0:00:03.795) 0:09:26.289 **** 2025-09-06 00:53:14.215007 | orchestrator | included: 
/ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.215012 | orchestrator | 2025-09-06 00:53:14.215017 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-06 00:53:14.215022 | orchestrator | Saturday 06 September 2025 00:51:38 +0000 (0:00:01.151) 0:09:27.441 **** 2025-09-06 00:53:14.215030 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-06 00:53:14.215034 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-06 00:53:14.215039 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-06 00:53:14.215044 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-06 00:53:14.215049 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-06 00:53:14.215054 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-06 00:53:14.215059 | orchestrator | 2025-09-06 00:53:14.215064 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-06 00:53:14.215068 | orchestrator | Saturday 06 September 2025 00:51:39 +0000 (0:00:01.010) 0:09:28.451 **** 2025-09-06 00:53:14.215073 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.215078 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-06 00:53:14.215083 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-06 00:53:14.215088 | orchestrator | 2025-09-06 00:53:14.215092 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-06 00:53:14.215097 | orchestrator | Saturday 06 September 2025 00:51:41 +0000 (0:00:02.156) 0:09:30.608 **** 2025-09-06 00:53:14.215102 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-06 00:53:14.215109 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-06 00:53:14.215114 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.215119 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-06 00:53:14.215124 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-06 00:53:14.215129 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.215134 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-06 00:53:14.215138 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-06 00:53:14.215143 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.215148 | orchestrator | 2025-09-06 00:53:14.215153 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-06 00:53:14.215157 | orchestrator | Saturday 06 September 2025 00:51:42 +0000 (0:00:01.442) 0:09:32.050 **** 2025-09-06 00:53:14.215162 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.215167 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.215172 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.215176 | orchestrator | 2025-09-06 00:53:14.215181 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-06 00:53:14.215186 | orchestrator | Saturday 06 September 2025 00:51:45 +0000 (0:00:02.975) 0:09:35.026 **** 2025-09-06 00:53:14.215191 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.215196 | orchestrator | skipping: 
[testbed-node-4] 2025-09-06 00:53:14.215200 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.215205 | orchestrator | 2025-09-06 00:53:14.215210 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-06 00:53:14.215215 | orchestrator | Saturday 06 September 2025 00:51:46 +0000 (0:00:00.384) 0:09:35.410 **** 2025-09-06 00:53:14.215220 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.215224 | orchestrator | 2025-09-06 00:53:14.215229 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-06 00:53:14.215234 | orchestrator | Saturday 06 September 2025 00:51:46 +0000 (0:00:00.609) 0:09:36.019 **** 2025-09-06 00:53:14.215239 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.215244 | orchestrator | 2025-09-06 00:53:14.215249 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-06 00:53:14.215253 | orchestrator | Saturday 06 September 2025 00:51:47 +0000 (0:00:00.952) 0:09:36.972 **** 2025-09-06 00:53:14.215264 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.215269 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.215273 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.215278 | orchestrator | 2025-09-06 00:53:14.215283 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-06 00:53:14.215288 | orchestrator | Saturday 06 September 2025 00:51:48 +0000 (0:00:01.219) 0:09:38.192 **** 2025-09-06 00:53:14.215293 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.215297 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.215302 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.215307 | orchestrator | 2025-09-06 00:53:14.215312 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-06 00:53:14.215317 | orchestrator | Saturday 06 September 2025 00:51:49 +0000 (0:00:01.204) 0:09:39.396 **** 2025-09-06 00:53:14.215321 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.215326 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.215331 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.215336 | orchestrator | 2025-09-06 00:53:14.215340 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-09-06 00:53:14.215345 | orchestrator | Saturday 06 September 2025 00:51:52 +0000 (0:00:02.035) 0:09:41.431 **** 2025-09-06 00:53:14.215350 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.215355 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.215360 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.215365 | orchestrator | 2025-09-06 00:53:14.215369 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-06 00:53:14.215374 | orchestrator | Saturday 06 September 2025 00:51:54 +0000 (0:00:02.099) 0:09:43.530 **** 2025-09-06 00:53:14.215379 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.215384 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.215389 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.215394 | orchestrator | 2025-09-06 00:53:14.215401 | orchestrator | RUNNING HANDLER [ceph-handler : Make 
tempdir for scripts] ********************** 2025-09-06 00:53:14.215406 | orchestrator | Saturday 06 September 2025 00:51:55 +0000 (0:00:01.605) 0:09:45.136 **** 2025-09-06 00:53:14.215411 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.215416 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.215421 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.215426 | orchestrator | 2025-09-06 00:53:14.215430 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-06 00:53:14.215435 | orchestrator | Saturday 06 September 2025 00:51:56 +0000 (0:00:00.682) 0:09:45.818 **** 2025-09-06 00:53:14.215440 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.215445 | orchestrator | 2025-09-06 00:53:14.215450 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-06 00:53:14.215455 | orchestrator | Saturday 06 September 2025 00:51:56 +0000 (0:00:00.571) 0:09:46.390 **** 2025-09-06 00:53:14.215460 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.215465 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.215469 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.215474 | orchestrator | 2025-09-06 00:53:14.215479 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-06 00:53:14.215484 | orchestrator | Saturday 06 September 2025 00:51:57 +0000 (0:00:00.601) 0:09:46.992 **** 2025-09-06 00:53:14.215489 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.215493 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.215498 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.215503 | orchestrator | 2025-09-06 00:53:14.215508 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-06 00:53:14.215513 | orchestrator | Saturday 06 September 2025 00:51:58 +0000 (0:00:01.270) 0:09:48.262 **** 2025-09-06 00:53:14.215520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.215529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.215534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.215538 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.215543 | orchestrator | 2025-09-06 00:53:14.215548 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-06 00:53:14.215553 | orchestrator | Saturday 06 September 2025 00:51:59 +0000 (0:00:00.617) 0:09:48.880 **** 2025-09-06 00:53:14.215558 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.215563 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.215567 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.215572 | orchestrator | 2025-09-06 00:53:14.215577 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-06 00:53:14.215582 | orchestrator | 2025-09-06 00:53:14.215587 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-06 00:53:14.215592 | orchestrator | Saturday 06 September 2025 00:52:00 +0000 (0:00:00.824) 0:09:49.704 **** 2025-09-06 00:53:14.215596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 
00:53:14.215601 | orchestrator | 2025-09-06 00:53:14.215606 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-06 00:53:14.215611 | orchestrator | Saturday 06 September 2025 00:52:00 +0000 (0:00:00.532) 0:09:50.237 **** 2025-09-06 00:53:14.215616 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.215621 | orchestrator | 2025-09-06 00:53:14.215626 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-06 00:53:14.215631 | orchestrator | Saturday 06 September 2025 00:52:01 +0000 (0:00:00.755) 0:09:50.993 **** 2025-09-06 00:53:14.215635 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.215640 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.215645 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.215650 | orchestrator | 2025-09-06 00:53:14.215655 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-06 00:53:14.215659 | orchestrator | Saturday 06 September 2025 00:52:01 +0000 (0:00:00.339) 0:09:51.333 **** 2025-09-06 00:53:14.215664 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.215669 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.215674 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.215679 | orchestrator | 2025-09-06 00:53:14.215683 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-06 00:53:14.215689 | orchestrator | Saturday 06 September 2025 00:52:02 +0000 (0:00:00.758) 0:09:52.092 **** 2025-09-06 00:53:14.215694 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.215699 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.215704 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.215709 | orchestrator | 2025-09-06 00:53:14.215713 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-06 00:53:14.215718 | orchestrator | Saturday 06 September 2025 00:52:03 +0000 (0:00:00.746) 0:09:52.838 **** 2025-09-06 00:53:14.215723 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.215728 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.215733 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.215750 | orchestrator | 2025-09-06 00:53:14.215755 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-06 00:53:14.215760 | orchestrator | Saturday 06 September 2025 00:52:04 +0000 (0:00:00.987) 0:09:53.825 **** 2025-09-06 00:53:14.215765 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.215770 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.215775 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.215779 | orchestrator | 2025-09-06 00:53:14.215784 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-06 00:53:14.215789 | orchestrator | Saturday 06 September 2025 00:52:04 +0000 (0:00:00.319) 0:09:54.145 **** 2025-09-06 00:53:14.215798 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.215803 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.215807 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.215812 | orchestrator | 2025-09-06 00:53:14.215817 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 
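The registered results of these container checks feed the Set_fact handler_*_status tasks that follow: each fact simply records whether a daemon container was found on the node, so the restart handlers later in the play only act where a daemon is actually running. A minimal sketch of that pattern (the variable and fact names are assumptions, not the ceph-ansible source):

    - name: Set_fact handler_osd_status
      ansible.builtin.set_fact:
        handler_osd_status: "{{ ceph_osd_container_stat.stdout_lines | default([]) | length > 0 }}"
      when: ceph_osd_container_stat is defined

This explains the "skipping" lines for facts such as handler_mon_status or handler_nfs_status on nodes where the corresponding container check was itself skipped.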
2025-09-06 00:53:14.215825 | orchestrator | Saturday 06 September 2025 00:52:05 +0000 (0:00:00.343) 0:09:54.488 **** 2025-09-06 00:53:14.215830 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.215835 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.215839 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.215844 | orchestrator | 2025-09-06 00:53:14.215849 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-06 00:53:14.215854 | orchestrator | Saturday 06 September 2025 00:52:05 +0000 (0:00:00.334) 0:09:54.822 **** 2025-09-06 00:53:14.215859 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.215863 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.215868 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.215873 | orchestrator | 2025-09-06 00:53:14.215878 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-06 00:53:14.215883 | orchestrator | Saturday 06 September 2025 00:52:06 +0000 (0:00:00.976) 0:09:55.799 **** 2025-09-06 00:53:14.215888 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.215893 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.215898 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.215903 | orchestrator | 2025-09-06 00:53:14.215908 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-06 00:53:14.215912 | orchestrator | Saturday 06 September 2025 00:52:07 +0000 (0:00:00.741) 0:09:56.540 **** 2025-09-06 00:53:14.215917 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.215922 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.215927 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.215932 | orchestrator | 2025-09-06 00:53:14.215937 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-06 00:53:14.215942 | orchestrator | Saturday 06 September 2025 00:52:07 +0000 (0:00:00.302) 0:09:56.843 **** 2025-09-06 00:53:14.215947 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.215952 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.215959 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.215964 | orchestrator | 2025-09-06 00:53:14.215969 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-06 00:53:14.215974 | orchestrator | Saturday 06 September 2025 00:52:07 +0000 (0:00:00.292) 0:09:57.136 **** 2025-09-06 00:53:14.215979 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.215984 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.215989 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.215994 | orchestrator | 2025-09-06 00:53:14.215998 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-06 00:53:14.216003 | orchestrator | Saturday 06 September 2025 00:52:08 +0000 (0:00:00.612) 0:09:57.749 **** 2025-09-06 00:53:14.216008 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.216013 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.216018 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.216022 | orchestrator | 2025-09-06 00:53:14.216027 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-06 00:53:14.216032 | orchestrator | Saturday 06 September 2025 00:52:08 +0000 (0:00:00.339) 0:09:58.089 **** 2025-09-06 
00:53:14.216037 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.216042 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.216046 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.216051 | orchestrator | 2025-09-06 00:53:14.216056 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-06 00:53:14.216061 | orchestrator | Saturday 06 September 2025 00:52:09 +0000 (0:00:00.321) 0:09:58.410 **** 2025-09-06 00:53:14.216066 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.216070 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.216080 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.216085 | orchestrator | 2025-09-06 00:53:14.216090 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-06 00:53:14.216095 | orchestrator | Saturday 06 September 2025 00:52:09 +0000 (0:00:00.306) 0:09:58.716 **** 2025-09-06 00:53:14.216099 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.216104 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.216109 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.216114 | orchestrator | 2025-09-06 00:53:14.216119 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-06 00:53:14.216123 | orchestrator | Saturday 06 September 2025 00:52:09 +0000 (0:00:00.555) 0:09:59.272 **** 2025-09-06 00:53:14.216128 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.216133 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.216138 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.216142 | orchestrator | 2025-09-06 00:53:14.216147 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-06 00:53:14.216152 | orchestrator | Saturday 06 September 2025 00:52:10 +0000 (0:00:00.358) 0:09:59.631 **** 2025-09-06 00:53:14.216157 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.216162 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.216167 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.216172 | orchestrator | 2025-09-06 00:53:14.216176 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-06 00:53:14.216181 | orchestrator | Saturday 06 September 2025 00:52:10 +0000 (0:00:00.347) 0:09:59.979 **** 2025-09-06 00:53:14.216186 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.216191 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.216196 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.216201 | orchestrator | 2025-09-06 00:53:14.216205 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-06 00:53:14.216210 | orchestrator | Saturday 06 September 2025 00:52:11 +0000 (0:00:00.768) 0:10:00.747 **** 2025-09-06 00:53:14.216215 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.216220 | orchestrator | 2025-09-06 00:53:14.216225 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-06 00:53:14.216230 | orchestrator | Saturday 06 September 2025 00:52:11 +0000 (0:00:00.521) 0:10:01.268 **** 2025-09-06 00:53:14.216235 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.216240 | orchestrator | skipping: [testbed-node-3] => 
(item=None)  2025-09-06 00:53:14.216244 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-06 00:53:14.216249 | orchestrator | 2025-09-06 00:53:14.216257 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-06 00:53:14.216262 | orchestrator | Saturday 06 September 2025 00:52:14 +0000 (0:00:02.332) 0:10:03.601 **** 2025-09-06 00:53:14.216266 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-06 00:53:14.216271 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-06 00:53:14.216276 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.216281 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-06 00:53:14.216286 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-06 00:53:14.216291 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.216295 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-06 00:53:14.216300 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-06 00:53:14.216305 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.216310 | orchestrator | 2025-09-06 00:53:14.216315 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-06 00:53:14.216320 | orchestrator | Saturday 06 September 2025 00:52:15 +0000 (0:00:01.221) 0:10:04.822 **** 2025-09-06 00:53:14.216324 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.216329 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.216339 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.216344 | orchestrator | 2025-09-06 00:53:14.216348 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-06 00:53:14.216353 | orchestrator | Saturday 06 September 2025 00:52:16 +0000 (0:00:00.581) 0:10:05.403 **** 2025-09-06 00:53:14.216358 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.216363 | orchestrator | 2025-09-06 00:53:14.216368 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-06 00:53:14.216375 | orchestrator | Saturday 06 September 2025 00:52:16 +0000 (0:00:00.539) 0:10:05.943 **** 2025-09-06 00:53:14.216380 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.216385 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.216390 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.216395 | orchestrator | 2025-09-06 00:53:14.216400 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-06 00:53:14.216405 | orchestrator | Saturday 06 September 2025 00:52:17 +0000 (0:00:00.813) 0:10:06.757 **** 2025-09-06 00:53:14.216410 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.216415 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-06 
00:53:14.216420 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.216425 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-06 00:53:14.216430 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.216435 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-06 00:53:14.216440 | orchestrator | 2025-09-06 00:53:14.216445 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-06 00:53:14.216450 | orchestrator | Saturday 06 September 2025 00:52:22 +0000 (0:00:05.024) 0:10:11.782 **** 2025-09-06 00:53:14.216454 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.216459 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-06 00:53:14.216464 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.216469 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-06 00:53:14.216473 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:53:14.216478 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-06 00:53:14.216483 | orchestrator | 2025-09-06 00:53:14.216488 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-06 00:53:14.216493 | orchestrator | Saturday 06 September 2025 00:52:24 +0000 (0:00:02.292) 0:10:14.074 **** 2025-09-06 00:53:14.216498 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-06 00:53:14.216502 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.216507 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-06 00:53:14.216512 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.216517 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-06 00:53:14.216522 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.216526 | orchestrator | 2025-09-06 00:53:14.216531 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-06 00:53:14.216540 | orchestrator | Saturday 06 September 2025 00:52:25 +0000 (0:00:01.228) 0:10:15.303 **** 2025-09-06 00:53:14.216545 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-06 00:53:14.216550 | orchestrator | 2025-09-06 00:53:14.216555 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-06 00:53:14.216560 | orchestrator | Saturday 06 September 2025 00:52:26 +0000 (0:00:00.250) 0:10:15.554 **** 2025-09-06 00:53:14.216567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-06 00:53:14.216573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-06 00:53:14.216578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-06 00:53:14.216583 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-06 00:53:14.216588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-06 00:53:14.216593 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.216597 | orchestrator | 2025-09-06 00:53:14.216602 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-06 00:53:14.216607 | orchestrator | Saturday 06 September 2025 00:52:27 +0000 (0:00:00.858) 0:10:16.412 **** 2025-09-06 00:53:14.216612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-06 00:53:14.216617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-06 00:53:14.216625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-06 00:53:14.216630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-06 00:53:14.216635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-06 00:53:14.216639 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.216644 | orchestrator | 2025-09-06 00:53:14.216649 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-06 00:53:14.216654 | orchestrator | Saturday 06 September 2025 00:52:27 +0000 (0:00:00.833) 0:10:17.246 **** 2025-09-06 00:53:14.216659 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-06 00:53:14.216664 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-06 00:53:14.216669 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-06 00:53:14.216674 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-06 00:53:14.216679 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-06 00:53:14.216684 | orchestrator | 2025-09-06 00:53:14.216689 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-06 00:53:14.216694 | orchestrator | Saturday 06 September 2025 00:52:59 +0000 (0:00:32.020) 0:10:49.266 **** 2025-09-06 00:53:14.216698 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.216707 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.216712 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.216716 | orchestrator | 2025-09-06 00:53:14.216721 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-06 00:53:14.216726 | orchestrator | 
Saturday 06 September 2025 00:53:00 +0000 (0:00:00.651) 0:10:49.917 **** 2025-09-06 00:53:14.216731 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.216736 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.216759 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.216764 | orchestrator | 2025-09-06 00:53:14.216769 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-06 00:53:14.216774 | orchestrator | Saturday 06 September 2025 00:53:00 +0000 (0:00:00.310) 0:10:50.228 **** 2025-09-06 00:53:14.216779 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.216784 | orchestrator | 2025-09-06 00:53:14.216789 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-06 00:53:14.216794 | orchestrator | Saturday 06 September 2025 00:53:01 +0000 (0:00:00.573) 0:10:50.801 **** 2025-09-06 00:53:14.216799 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.216804 | orchestrator | 2025-09-06 00:53:14.216809 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-06 00:53:14.216814 | orchestrator | Saturday 06 September 2025 00:53:02 +0000 (0:00:00.822) 0:10:51.624 **** 2025-09-06 00:53:14.216818 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.216823 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.216828 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.216833 | orchestrator | 2025-09-06 00:53:14.216838 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-06 00:53:14.216846 | orchestrator | Saturday 06 September 2025 00:53:03 +0000 (0:00:01.301) 0:10:52.926 **** 2025-09-06 00:53:14.216852 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.216857 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.216861 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.216866 | orchestrator | 2025-09-06 00:53:14.216871 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-06 00:53:14.216876 | orchestrator | Saturday 06 September 2025 00:53:04 +0000 (0:00:01.241) 0:10:54.167 **** 2025-09-06 00:53:14.216881 | orchestrator | changed: [testbed-node-3] 2025-09-06 00:53:14.216886 | orchestrator | changed: [testbed-node-4] 2025-09-06 00:53:14.216891 | orchestrator | changed: [testbed-node-5] 2025-09-06 00:53:14.216895 | orchestrator | 2025-09-06 00:53:14.216900 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-06 00:53:14.216905 | orchestrator | Saturday 06 September 2025 00:53:06 +0000 (0:00:02.141) 0:10:56.308 **** 2025-09-06 00:53:14.216910 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.216915 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.216920 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-06 00:53:14.216925 | orchestrator | 2025-09-06 00:53:14.216929 | orchestrator | RUNNING 
HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-06 00:53:14.216934 | orchestrator | Saturday 06 September 2025 00:53:09 +0000 (0:00:02.416) 0:10:58.725 **** 2025-09-06 00:53:14.216939 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.216947 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.216952 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.216957 | orchestrator | 2025-09-06 00:53:14.216962 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-06 00:53:14.216971 | orchestrator | Saturday 06 September 2025 00:53:09 +0000 (0:00:00.630) 0:10:59.356 **** 2025-09-06 00:53:14.216975 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:53:14.216980 | orchestrator | 2025-09-06 00:53:14.216985 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-06 00:53:14.216990 | orchestrator | Saturday 06 September 2025 00:53:10 +0000 (0:00:00.580) 0:10:59.937 **** 2025-09-06 00:53:14.216995 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.217000 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.217004 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.217009 | orchestrator | 2025-09-06 00:53:14.217014 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-06 00:53:14.217019 | orchestrator | Saturday 06 September 2025 00:53:10 +0000 (0:00:00.321) 0:11:00.258 **** 2025-09-06 00:53:14.217024 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.217028 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:53:14.217033 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:53:14.217038 | orchestrator | 2025-09-06 00:53:14.217043 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-06 00:53:14.217048 | orchestrator | Saturday 06 September 2025 00:53:11 +0000 (0:00:00.617) 0:11:00.876 **** 2025-09-06 00:53:14.217053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-06 00:53:14.217058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-06 00:53:14.217062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-06 00:53:14.217067 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:53:14.217072 | orchestrator | 2025-09-06 00:53:14.217077 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-06 00:53:14.217081 | orchestrator | Saturday 06 September 2025 00:53:12 +0000 (0:00:00.605) 0:11:01.481 **** 2025-09-06 00:53:14.217086 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:53:14.217091 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:53:14.217096 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:53:14.217101 | orchestrator | 2025-09-06 00:53:14.217105 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:53:14.217110 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-06 00:53:14.217115 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-06 00:53:14.217120 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-06 
00:53:14.217125 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-06 00:53:14.217130 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-06 00:53:14.217135 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-06 00:53:14.217140 | orchestrator | 2025-09-06 00:53:14.217145 | orchestrator | 2025-09-06 00:53:14.217149 | orchestrator | 2025-09-06 00:53:14.217154 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:53:14.217159 | orchestrator | Saturday 06 September 2025 00:53:12 +0000 (0:00:00.263) 0:11:01.745 **** 2025-09-06 00:53:14.217164 | orchestrator | =============================================================================== 2025-09-06 00:53:14.217171 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 86.72s 2025-09-06 00:53:14.217176 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.92s 2025-09-06 00:53:14.217185 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.02s 2025-09-06 00:53:14.217190 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.48s 2025-09-06 00:53:14.217195 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 16.32s 2025-09-06 00:53:14.217199 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.55s 2025-09-06 00:53:14.217204 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.52s 2025-09-06 00:53:14.217209 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.72s 2025-09-06 00:53:14.217214 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.90s 2025-09-06 00:53:14.217219 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.40s 2025-09-06 00:53:14.217223 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.87s 2025-09-06 00:53:14.217228 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.14s 2025-09-06 00:53:14.217233 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.02s 2025-09-06 00:53:14.217238 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.27s 2025-09-06 00:53:14.217242 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.13s 2025-09-06 00:53:14.217250 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.95s 2025-09-06 00:53:14.217255 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.89s 2025-09-06 00:53:14.217259 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.80s 2025-09-06 00:53:14.217264 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.38s 2025-09-06 00:53:14.217269 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.18s 2025-09-06 00:53:14.217274 | orchestrator | 2025-09-06 00:53:14 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:17.237529 | orchestrator | 2025-09-06 00:53:17 
| INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:17.238105 | orchestrator | 2025-09-06 00:53:17 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:17.239390 | orchestrator | 2025-09-06 00:53:17 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:17.239416 | orchestrator | 2025-09-06 00:53:17 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:20.278998 | orchestrator | 2025-09-06 00:53:20 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:20.279286 | orchestrator | 2025-09-06 00:53:20 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:20.281058 | orchestrator | 2025-09-06 00:53:20 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:20.281140 | orchestrator | 2025-09-06 00:53:20 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:23.327306 | orchestrator | 2025-09-06 00:53:23 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:23.329204 | orchestrator | 2025-09-06 00:53:23 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:23.330635 | orchestrator | 2025-09-06 00:53:23 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:23.331078 | orchestrator | 2025-09-06 00:53:23 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:26.376696 | orchestrator | 2025-09-06 00:53:26 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:26.378314 | orchestrator | 2025-09-06 00:53:26 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:26.380620 | orchestrator | 2025-09-06 00:53:26 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:26.380930 | orchestrator | 2025-09-06 00:53:26 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:29.420794 | orchestrator | 2025-09-06 00:53:29 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:29.421005 | orchestrator | 2025-09-06 00:53:29 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:29.422183 | orchestrator | 2025-09-06 00:53:29 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:29.422210 | orchestrator | 2025-09-06 00:53:29 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:32.467702 | orchestrator | 2025-09-06 00:53:32 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:32.469591 | orchestrator | 2025-09-06 00:53:32 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:32.470961 | orchestrator | 2025-09-06 00:53:32 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:32.470985 | orchestrator | 2025-09-06 00:53:32 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:35.518267 | orchestrator | 2025-09-06 00:53:35 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:35.518364 | orchestrator | 2025-09-06 00:53:35 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:35.518377 | orchestrator | 2025-09-06 00:53:35 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:35.518387 | orchestrator | 2025-09-06 00:53:35 | INFO  | Wait 1 second(s) 
until the next check 2025-09-06 00:53:38.561501 | orchestrator | 2025-09-06 00:53:38 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:38.563429 | orchestrator | 2025-09-06 00:53:38 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:38.565861 | orchestrator | 2025-09-06 00:53:38 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:38.565888 | orchestrator | 2025-09-06 00:53:38 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:41.610767 | orchestrator | 2025-09-06 00:53:41 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:41.613815 | orchestrator | 2025-09-06 00:53:41 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:41.615998 | orchestrator | 2025-09-06 00:53:41 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:41.616025 | orchestrator | 2025-09-06 00:53:41 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:44.662529 | orchestrator | 2025-09-06 00:53:44 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:44.664357 | orchestrator | 2025-09-06 00:53:44 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:44.667057 | orchestrator | 2025-09-06 00:53:44 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:44.667550 | orchestrator | 2025-09-06 00:53:44 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:47.712018 | orchestrator | 2025-09-06 00:53:47 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:47.713241 | orchestrator | 2025-09-06 00:53:47 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:47.715331 | orchestrator | 2025-09-06 00:53:47 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:47.715837 | orchestrator | 2025-09-06 00:53:47 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:50.761298 | orchestrator | 2025-09-06 00:53:50 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:50.764149 | orchestrator | 2025-09-06 00:53:50 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:50.765873 | orchestrator | 2025-09-06 00:53:50 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:50.766097 | orchestrator | 2025-09-06 00:53:50 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:53.822338 | orchestrator | 2025-09-06 00:53:53 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:53.824268 | orchestrator | 2025-09-06 00:53:53 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:53.825839 | orchestrator | 2025-09-06 00:53:53 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:53.825862 | orchestrator | 2025-09-06 00:53:53 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:56.884889 | orchestrator | 2025-09-06 00:53:56 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:56.886300 | orchestrator | 2025-09-06 00:53:56 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:56.887705 | orchestrator | 2025-09-06 00:53:56 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 
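[Editor's note] The Ceph play recap above closes the ceph-ansible run (the container image pull and the ceph-volume OSD creation dominate the timings) while the osism poller keeps checking the remaining Celery tasks. Not part of this job's output, but a quick sanity check of the resulting cluster could look like the sketch below; the mon container name is an assumption for a containerized ceph-ansible deployment, not taken from this log.

  # hypothetical manual check on a control node; adjust the container name from `docker ps`
  docker ps --filter name=ceph-mon --format '{{.Names}}'
  docker exec ceph-mon-testbed-node-0 ceph -s        # overall health plus mon/mgr/osd counts
  docker exec ceph-mon-testbed-node-0 ceph osd tree  # OSDs created via ceph-volume should be up/in
  docker exec ceph-mon-testbed-node-0 ceph df        # pools created for rgw/mds in the recap above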
2025-09-06 00:53:56.887733 | orchestrator | 2025-09-06 00:53:56 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:53:59.941208 | orchestrator | 2025-09-06 00:53:59 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:53:59.942423 | orchestrator | 2025-09-06 00:53:59 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:53:59.943541 | orchestrator | 2025-09-06 00:53:59 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:53:59.943565 | orchestrator | 2025-09-06 00:53:59 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:02.985908 | orchestrator | 2025-09-06 00:54:02 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:54:02.987196 | orchestrator | 2025-09-06 00:54:02 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:54:02.988608 | orchestrator | 2025-09-06 00:54:02 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:02.988634 | orchestrator | 2025-09-06 00:54:02 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:06.028938 | orchestrator | 2025-09-06 00:54:06 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:54:06.030133 | orchestrator | 2025-09-06 00:54:06 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:54:06.030327 | orchestrator | 2025-09-06 00:54:06 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:06.030348 | orchestrator | 2025-09-06 00:54:06 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:09.075382 | orchestrator | 2025-09-06 00:54:09 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:54:09.077384 | orchestrator | 2025-09-06 00:54:09 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:54:09.079096 | orchestrator | 2025-09-06 00:54:09 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:09.079560 | orchestrator | 2025-09-06 00:54:09 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:12.124812 | orchestrator | 2025-09-06 00:54:12 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:54:12.126743 | orchestrator | 2025-09-06 00:54:12 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:54:12.128223 | orchestrator | 2025-09-06 00:54:12 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:12.128715 | orchestrator | 2025-09-06 00:54:12 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:15.169210 | orchestrator | 2025-09-06 00:54:15 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:54:15.170497 | orchestrator | 2025-09-06 00:54:15 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state STARTED 2025-09-06 00:54:15.172327 | orchestrator | 2025-09-06 00:54:15 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:15.173393 | orchestrator | 2025-09-06 00:54:15 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:18.210722 | orchestrator | 2025-09-06 00:54:18 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:54:18.215409 | orchestrator | 2025-09-06 00:54:18.215604 | orchestrator | 2025-09-06 00:54:18.215623 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-09-06 00:54:18.215693 | orchestrator | 2025-09-06 00:54:18.215706 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:54:18.215717 | orchestrator | Saturday 06 September 2025 00:51:14 +0000 (0:00:00.198) 0:00:00.198 **** 2025-09-06 00:54:18.215728 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:18.215741 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:54:18.215751 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:54:18.215762 | orchestrator | 2025-09-06 00:54:18.215774 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:54:18.215785 | orchestrator | Saturday 06 September 2025 00:51:14 +0000 (0:00:00.274) 0:00:00.473 **** 2025-09-06 00:54:18.215797 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-06 00:54:18.215808 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-06 00:54:18.215819 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-06 00:54:18.215829 | orchestrator | 2025-09-06 00:54:18.215840 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-06 00:54:18.215851 | orchestrator | 2025-09-06 00:54:18.215862 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-06 00:54:18.215872 | orchestrator | Saturday 06 September 2025 00:51:15 +0000 (0:00:00.353) 0:00:00.827 **** 2025-09-06 00:54:18.215883 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:54:18.215894 | orchestrator | 2025-09-06 00:54:18.215905 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-06 00:54:18.215915 | orchestrator | Saturday 06 September 2025 00:51:15 +0000 (0:00:00.488) 0:00:01.315 **** 2025-09-06 00:54:18.215926 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-06 00:54:18.215937 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-06 00:54:18.215947 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-06 00:54:18.215958 | orchestrator | 2025-09-06 00:54:18.215968 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-06 00:54:18.215979 | orchestrator | Saturday 06 September 2025 00:51:16 +0000 (0:00:00.632) 0:00:01.948 **** 2025-09-06 00:54:18.215994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.216052 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.216077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.216093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:54:18.216107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:54:18.216129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:54:18.216142 | orchestrator | 2025-09-06 00:54:18.216153 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-06 00:54:18.216164 | orchestrator | Saturday 06 September 2025 00:51:17 +0000 (0:00:01.553) 0:00:03.502 **** 2025-09-06 00:54:18.216175 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:54:18.216193 | orchestrator | 2025-09-06 00:54:18.216211 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-06 00:54:18.216230 | orchestrator | Saturday 06 September 2025 00:51:18 +0000 (0:00:00.452) 0:00:03.954 **** 2025-09-06 00:54:18.216263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.216278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.216331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.216358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:54:18.216379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:54:18.216392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:54:18.216404 | orchestrator | 2025-09-06 00:54:18.216415 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-06 00:54:18.216434 | orchestrator | Saturday 06 September 2025 00:51:20 +0000 (0:00:02.583) 0:00:06.538 **** 2025-09-06 00:54:18.216445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-06 00:54:18.216462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-06 00:54:18.216474 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:18.216485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-06 00:54:18.216506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-06 00:54:18.216518 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:18.216537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-06 00:54:18.216554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-06 00:54:18.216566 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:18.216577 | orchestrator | 2025-09-06 00:54:18.216587 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-06 00:54:18.216598 | orchestrator | Saturday 06 September 2025 00:51:21 +0000 (0:00:01.126) 0:00:07.665 **** 2025-09-06 00:54:18.216610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-06 00:54:18.216629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-06 00:54:18.216675 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:18.216689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-06 00:54:18.216706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-06 00:54:18.216718 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:18.216729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-06 00:54:18.216750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-06 00:54:18.216769 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:18.216780 | orchestrator | 2025-09-06 00:54:18.216791 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-06 00:54:18.216802 | orchestrator | Saturday 06 September 2025 00:51:23 +0000 (0:00:01.118) 0:00:08.783 **** 2025-09-06 00:54:18.216813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.216824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.216840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.216860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:54:18.216873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:54:18.216899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:54:18.216911 | orchestrator | 2025-09-06 00:54:18.216922 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-06 00:54:18.216938 | orchestrator | Saturday 06 September 2025 00:51:25 +0000 (0:00:02.246) 0:00:11.030 **** 2025-09-06 00:54:18.216949 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:18.216960 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:54:18.216971 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:54:18.216981 | orchestrator | 2025-09-06 00:54:18.216992 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-06 00:54:18.217003 | orchestrator | Saturday 06 September 2025 00:51:27 +0000 (0:00:02.602) 0:00:13.632 **** 2025-09-06 00:54:18.217014 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:18.217024 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:54:18.217035 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:54:18.217046 | orchestrator | 2025-09-06 00:54:18.217056 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-06 00:54:18.217067 | orchestrator | Saturday 06 September 2025 00:51:29 +0000 (0:00:01.701) 0:00:15.333 **** 2025-09-06 00:54:18.217078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.217104 | orchestrator | 2025-09-06 00:54:18 | INFO  | Task dd18d3d0-7e76-4c30-abcd-1e3919c2bc08 is in state SUCCESS 2025-09-06 00:54:18.217117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.217130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-06 00:54:18.217146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass':
'password'}}}}) 2025-09-06 00:54:18.217159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:54:18.217188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-06 00:54:18.217200 | orchestrator | 2025-09-06 00:54:18.217211 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-06 00:54:18.217223 | orchestrator | Saturday 06 September 2025 00:51:31 +0000 (0:00:01.965) 0:00:17.299 **** 2025-09-06 00:54:18.217243 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:18.217263 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:18.217282 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:18.217302 | orchestrator | 2025-09-06 00:54:18.217322 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-06 00:54:18.217340 | orchestrator | Saturday 06 September 2025 00:51:31 +0000 (0:00:00.278) 0:00:17.577 **** 2025-09-06 00:54:18.217352 | orchestrator | 2025-09-06 00:54:18.217363 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-06 00:54:18.217373 | orchestrator | Saturday 06 September 2025 00:51:31 +0000 (0:00:00.061) 0:00:17.638 **** 2025-09-06 00:54:18.217384 | orchestrator | 2025-09-06 00:54:18.217394 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-06 00:54:18.217405 | orchestrator | Saturday 06 September 2025 00:51:32 +0000 (0:00:00.064) 0:00:17.702 **** 
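[Editor's note] Each container definition above carries a healthcheck of the form "healthcheck_curl http://<internal-ip>:9200" (OpenSearch) or ":5601" (Dashboards). After the restart handlers have run, roughly the same probes can be reproduced with plain curl; the address below is the testbed-node-0 internal IP taken from the log, and jq is assumed to be available.

  curl -s http://192.168.16.10:9200/_cluster/health | jq .status        # expect "green" or "yellow"
  curl -s -o /dev/null -w '%{http_code}\n' http://192.168.16.10:5601/   # Dashboards should answer with an HTTP status code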
2025-09-06 00:54:18.217415 | orchestrator | 2025-09-06 00:54:18.217426 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-06 00:54:18.217437 | orchestrator | Saturday 06 September 2025 00:51:32 +0000 (0:00:00.066) 0:00:17.769 **** 2025-09-06 00:54:18.217447 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:18.217458 | orchestrator | 2025-09-06 00:54:18.217469 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-06 00:54:18.217479 | orchestrator | Saturday 06 September 2025 00:51:32 +0000 (0:00:00.204) 0:00:17.974 **** 2025-09-06 00:54:18.217490 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:18.217500 | orchestrator | 2025-09-06 00:54:18.217511 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-06 00:54:18.217522 | orchestrator | Saturday 06 September 2025 00:51:32 +0000 (0:00:00.655) 0:00:18.630 **** 2025-09-06 00:54:18.217532 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:18.217543 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:54:18.217553 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:54:18.217564 | orchestrator | 2025-09-06 00:54:18.217574 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-06 00:54:18.217585 | orchestrator | Saturday 06 September 2025 00:52:39 +0000 (0:01:06.579) 0:01:25.210 **** 2025-09-06 00:54:18.217595 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:18.217606 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:54:18.217616 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:54:18.217627 | orchestrator | 2025-09-06 00:54:18.217637 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-06 00:54:18.217679 | orchestrator | Saturday 06 September 2025 00:54:04 +0000 (0:01:24.698) 0:02:49.908 **** 2025-09-06 00:54:18.217691 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:54:18.217710 | orchestrator | 2025-09-06 00:54:18.217721 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-06 00:54:18.217732 | orchestrator | Saturday 06 September 2025 00:54:04 +0000 (0:00:00.520) 0:02:50.428 **** 2025-09-06 00:54:18.217743 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:18.217753 | orchestrator | 2025-09-06 00:54:18.217764 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-06 00:54:18.217775 | orchestrator | Saturday 06 September 2025 00:54:07 +0000 (0:00:02.743) 0:02:53.172 **** 2025-09-06 00:54:18.217786 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:18.217796 | orchestrator | 2025-09-06 00:54:18.217807 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-06 00:54:18.217818 | orchestrator | Saturday 06 September 2025 00:54:09 +0000 (0:00:02.243) 0:02:55.416 **** 2025-09-06 00:54:18.217829 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:18.217839 | orchestrator | 2025-09-06 00:54:18.217850 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-06 00:54:18.217861 | orchestrator | Saturday 06 September 2025 00:54:12 +0000 (0:00:02.696) 0:02:58.113 **** 2025-09-06 00:54:18.217872 | orchestrator | changed: 
[testbed-node-0] 2025-09-06 00:54:18.217882 | orchestrator | 2025-09-06 00:54:18.217893 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:54:18.217905 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-06 00:54:18.217917 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-06 00:54:18.217936 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-06 00:54:18.217948 | orchestrator | 2025-09-06 00:54:18.217958 | orchestrator | 2025-09-06 00:54:18.217969 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:54:18.217980 | orchestrator | Saturday 06 September 2025 00:54:15 +0000 (0:00:02.656) 0:03:00.769 **** 2025-09-06 00:54:18.217991 | orchestrator | =============================================================================== 2025-09-06 00:54:18.218002 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 84.70s 2025-09-06 00:54:18.218093 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.58s 2025-09-06 00:54:18.218110 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.74s 2025-09-06 00:54:18.218121 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.70s 2025-09-06 00:54:18.218132 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.66s 2025-09-06 00:54:18.218142 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.60s 2025-09-06 00:54:18.218153 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.59s 2025-09-06 00:54:18.218164 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.25s 2025-09-06 00:54:18.218174 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.24s 2025-09-06 00:54:18.218185 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.97s 2025-09-06 00:54:18.218196 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.70s 2025-09-06 00:54:18.218206 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.55s 2025-09-06 00:54:18.218217 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.12s 2025-09-06 00:54:18.218228 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.12s 2025-09-06 00:54:18.218238 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.66s 2025-09-06 00:54:18.218257 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.63s 2025-09-06 00:54:18.218268 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-09-06 00:54:18.218284 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2025-09-06 00:54:18.218304 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.45s 2025-09-06 00:54:18.218324 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2025-09-06 
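[Editor's note] The retention tasks above ("Check if a log retention policy exists", "Create new log retention policy", "Apply retention policy to existing indices") go through the OpenSearch Index State Management API. A hedged way to inspect the result from a control node is sketched below; the endpoint paths are the standard ISM plugin routes and the wildcard index pattern is an assumption, neither is taken from this log.

  curl -s 'http://192.168.16.10:9200/_plugins/_ism/policies?pretty'    # list the configured ISM policies
  curl -s 'http://192.168.16.10:9200/_plugins/_ism/explain/*?pretty'   # show which indices are managed by a policy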
00:54:18.218344 | orchestrator | 2025-09-06 00:54:18 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:18.218359 | orchestrator | 2025-09-06 00:54:18 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:21.258277 | orchestrator | 2025-09-06 00:54:21 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state STARTED 2025-09-06 00:54:21.260038 | orchestrator | 2025-09-06 00:54:21 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:21.260352 | orchestrator | 2025-09-06 00:54:21 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:24.304844 | orchestrator | 2025-09-06 00:54:24 | INFO  | Task e6eb30b9-f164-4ec9-ad21-57ae04d89d4b is in state SUCCESS 2025-09-06 00:54:24.306612 | orchestrator | 2025-09-06 00:54:24.306739 | orchestrator | 2025-09-06 00:54:24.306772 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-06 00:54:24.306784 | orchestrator | 2025-09-06 00:54:24.306796 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-06 00:54:24.306807 | orchestrator | Saturday 06 September 2025 00:51:14 +0000 (0:00:00.097) 0:00:00.097 **** 2025-09-06 00:54:24.306818 | orchestrator | ok: [localhost] => { 2025-09-06 00:54:24.306831 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-06 00:54:24.306842 | orchestrator | } 2025-09-06 00:54:24.306853 | orchestrator | 2025-09-06 00:54:24.306864 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-06 00:54:24.306875 | orchestrator | Saturday 06 September 2025 00:51:14 +0000 (0:00:00.027) 0:00:00.125 **** 2025-09-06 00:54:24.306886 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-06 00:54:24.306899 | orchestrator | ...ignoring 2025-09-06 00:54:24.306911 | orchestrator | 2025-09-06 00:54:24.306922 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-06 00:54:24.306933 | orchestrator | Saturday 06 September 2025 00:51:17 +0000 (0:00:02.737) 0:00:02.862 **** 2025-09-06 00:54:24.307171 | orchestrator | skipping: [localhost] 2025-09-06 00:54:24.307184 | orchestrator | 2025-09-06 00:54:24.307195 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-06 00:54:24.307206 | orchestrator | Saturday 06 September 2025 00:51:17 +0000 (0:00:00.041) 0:00:02.903 **** 2025-09-06 00:54:24.307217 | orchestrator | ok: [localhost] 2025-09-06 00:54:24.307228 | orchestrator | 2025-09-06 00:54:24.307238 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:54:24.307249 | orchestrator | 2025-09-06 00:54:24.307260 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:54:24.307271 | orchestrator | Saturday 06 September 2025 00:51:17 +0000 (0:00:00.143) 0:00:03.046 **** 2025-09-06 00:54:24.307282 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:24.307293 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:54:24.307310 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:54:24.307328 | orchestrator | 2025-09-06 00:54:24.307347 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:54:24.307366 | orchestrator | Saturday 06 September 2025 00:51:17 +0000 (0:00:00.255) 0:00:03.301 **** 2025-09-06 00:54:24.307385 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-06 00:54:24.307435 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-06 00:54:24.307448 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-06 00:54:24.307459 | orchestrator | 2025-09-06 00:54:24.307469 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-06 00:54:24.307480 | orchestrator | 2025-09-06 00:54:24.307491 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-06 00:54:24.307502 | orchestrator | Saturday 06 September 2025 00:51:18 +0000 (0:00:00.444) 0:00:03.746 **** 2025-09-06 00:54:24.307513 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-06 00:54:24.307524 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-06 00:54:24.307535 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-06 00:54:24.307545 | orchestrator | 2025-09-06 00:54:24.307556 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-06 00:54:24.307566 | orchestrator | Saturday 06 September 2025 00:51:18 +0000 (0:00:00.360) 0:00:04.106 **** 2025-09-06 00:54:24.307577 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:54:24.307589 | orchestrator | 2025-09-06 00:54:24.307600 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-06 00:54:24.307611 | orchestrator | Saturday 06 September 2025 00:51:19 +0000 (0:00:00.482) 0:00:04.589 **** 2025-09-06 
00:54:24.307685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-06 00:54:24.307707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-06 00:54:24.307731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-06 00:54:24.307743 | orchestrator | 2025-09-06 00:54:24.307765 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-06 00:54:24.307784 | orchestrator | Saturday 06 September 2025 00:51:22 +0000 (0:00:02.951) 0:00:07.540 **** 2025-09-06 00:54:24.307797 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.307811 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.307823 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.307836 | orchestrator | 2025-09-06 00:54:24.307849 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-06 00:54:24.307862 | orchestrator | Saturday 06 September 2025 00:51:22 +0000 (0:00:00.703) 0:00:08.243 **** 2025-09-06 00:54:24.307875 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.307887 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.307899 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.307911 | orchestrator | 2025-09-06 00:54:24.307924 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-06 00:54:24.307936 | orchestrator | Saturday 06 September 2025 00:51:24 +0000 (0:00:01.345) 0:00:09.589 **** 2025-09-06 00:54:24.307950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-06 00:54:24.307987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-06 00:54:24.308003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-06 00:54:24.308024 | orchestrator | 2025-09-06 00:54:24.308037 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-06 00:54:24.308051 | orchestrator | Saturday 06 September 2025 00:51:27 +0000 (0:00:03.539) 0:00:13.128 **** 2025-09-06 00:54:24.308064 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.308077 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.308090 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.308102 | orchestrator | 2025-09-06 00:54:24.308115 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-06 00:54:24.308129 | orchestrator | Saturday 06 September 2025 00:51:28 +0000 (0:00:01.032) 0:00:14.160 **** 2025-09-06 00:54:24.308140 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.308151 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:54:24.308162 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:54:24.308172 | orchestrator | 2025-09-06 00:54:24.308183 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-06 00:54:24.308194 | orchestrator | Saturday 06 September 2025 00:51:32 +0000 (0:00:03.683) 0:00:17.843 **** 2025-09-06 00:54:24.308205 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:54:24.308216 | orchestrator | 2025-09-06 00:54:24.308227 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-06 00:54:24.308237 | orchestrator | Saturday 06 September 2025 00:51:32 +0000 (0:00:00.561) 0:00:18.405 **** 2025-09-06 00:54:24.308280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:54:24.308301 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.308314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:54:24.308325 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:24.308349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:54:24.308487 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.308504 | orchestrator | 2025-09-06 00:54:24.308515 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-06 00:54:24.308526 | orchestrator | Saturday 06 September 2025 00:51:36 +0000 (0:00:03.486) 0:00:21.891 **** 2025-09-06 00:54:24.308538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:54:24.308550 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.308575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:54:24.308595 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.308608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:54:24.308620 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:24.308630 | orchestrator | 2025-09-06 00:54:24.308673 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-06 00:54:24.308693 | orchestrator | Saturday 06 September 2025 00:51:39 +0000 (0:00:02.928) 0:00:24.820 **** 2025-09-06 00:54:24.308712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:54:24.308746 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.308774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:54:24.308787 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.308799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-06 00:54:24.308817 | 
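Every item of the two service-cert-copy tasks above is skipped, which is consistent with backend internal TLS not being enabled for MariaDB in this testbed. A rough sketch of what such a conditional copy task can look like (paths, variable names and the condition are assumptions):

# Sketch only; the real role's source paths and condition may differ.
- name: mariadb | Copying over backend internal TLS certificate (illustrative)
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/mariadb-cert.pem"   # source path assumed
    dest: /etc/kolla/mariadb/mariadb-cert.pem
    mode: "0600"
  when: mariadb_enable_tls_backend | bool   # condition assumed; false here, hence "skipping"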
orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:24.308828 | orchestrator | 2025-09-06 00:54:24.308839 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-06 00:54:24.308850 | orchestrator | Saturday 06 September 2025 00:51:42 +0000 (0:00:02.920) 0:00:27.741 **** 2025-09-06 00:54:24.308874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-06 00:54:24.308889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-06 00:54:24.308921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-06 00:54:24.308934 | orchestrator | 2025-09-06 00:54:24.308945 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-06 00:54:24.308956 | orchestrator | Saturday 06 September 2025 00:51:45 +0000 (0:00:02.866) 0:00:30.607 **** 2025-09-06 00:54:24.308967 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.308978 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:54:24.308989 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:54:24.308999 | orchestrator | 2025-09-06 00:54:24.309010 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-06 00:54:24.309021 | orchestrator | Saturday 06 September 2025 00:51:45 +0000 (0:00:00.799) 0:00:31.407 **** 2025-09-06 00:54:24.309032 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:24.309043 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:54:24.309053 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:54:24.309064 | orchestrator | 2025-09-06 00:54:24.309075 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-06 00:54:24.309086 | orchestrator | Saturday 06 September 2025 00:51:46 +0000 
(0:00:00.611) 0:00:32.019 **** 2025-09-06 00:54:24.309096 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:24.309110 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:54:24.309122 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:54:24.309135 | orchestrator | 2025-09-06 00:54:24.309147 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-06 00:54:24.309159 | orchestrator | Saturday 06 September 2025 00:51:46 +0000 (0:00:00.322) 0:00:32.342 **** 2025-09-06 00:54:24.309173 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-06 00:54:24.309186 | orchestrator | ...ignoring 2025-09-06 00:54:24.309200 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-06 00:54:24.309213 | orchestrator | ...ignoring 2025-09-06 00:54:24.309225 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-06 00:54:24.309247 | orchestrator | ...ignoring 2025-09-06 00:54:24.309260 | orchestrator | 2025-09-06 00:54:24.309272 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-06 00:54:24.309285 | orchestrator | Saturday 06 September 2025 00:51:57 +0000 (0:00:10.944) 0:00:43.286 **** 2025-09-06 00:54:24.309297 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:24.309309 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:54:24.309322 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:54:24.309334 | orchestrator | 2025-09-06 00:54:24.309347 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-06 00:54:24.309360 | orchestrator | Saturday 06 September 2025 00:51:58 +0000 (0:00:00.426) 0:00:43.713 **** 2025-09-06 00:54:24.309372 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:24.309385 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.309397 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.309409 | orchestrator | 2025-09-06 00:54:24.309423 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-06 00:54:24.309435 | orchestrator | Saturday 06 September 2025 00:51:58 +0000 (0:00:00.699) 0:00:44.412 **** 2025-09-06 00:54:24.309448 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:24.309461 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.309472 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.309483 | orchestrator | 2025-09-06 00:54:24.309493 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-06 00:54:24.309504 | orchestrator | Saturday 06 September 2025 00:51:59 +0000 (0:00:00.430) 0:00:44.843 **** 2025-09-06 00:54:24.309515 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:24.309526 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.309537 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.309547 | orchestrator | 2025-09-06 00:54:24.309558 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-06 00:54:24.309569 | orchestrator | Saturday 06 September 2025 00:51:59 +0000 (0:00:00.434) 0:00:45.278 **** 2025-09-06 00:54:24.309580 | 
orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:24.309591 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:54:24.309602 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:54:24.309612 | orchestrator | 2025-09-06 00:54:24.309623 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-06 00:54:24.309655 | orchestrator | Saturday 06 September 2025 00:52:00 +0000 (0:00:00.427) 0:00:45.706 **** 2025-09-06 00:54:24.309675 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:24.309691 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.309702 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.309713 | orchestrator | 2025-09-06 00:54:24.309724 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-06 00:54:24.309735 | orchestrator | Saturday 06 September 2025 00:52:00 +0000 (0:00:00.617) 0:00:46.323 **** 2025-09-06 00:54:24.309746 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.309756 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.309767 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-06 00:54:24.309778 | orchestrator | 2025-09-06 00:54:24.309788 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-06 00:54:24.309799 | orchestrator | Saturday 06 September 2025 00:52:01 +0000 (0:00:00.386) 0:00:46.710 **** 2025-09-06 00:54:24.309810 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.309821 | orchestrator | 2025-09-06 00:54:24.309831 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-06 00:54:24.309842 | orchestrator | Saturday 06 September 2025 00:52:11 +0000 (0:00:10.272) 0:00:56.983 **** 2025-09-06 00:54:24.309853 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:24.309864 | orchestrator | 2025-09-06 00:54:24.309874 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-06 00:54:24.309885 | orchestrator | Saturday 06 September 2025 00:52:11 +0000 (0:00:00.116) 0:00:57.099 **** 2025-09-06 00:54:24.309905 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:24.309933 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.309944 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.309955 | orchestrator | 2025-09-06 00:54:24.309966 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-06 00:54:24.309977 | orchestrator | Saturday 06 September 2025 00:52:12 +0000 (0:00:00.949) 0:00:58.049 **** 2025-09-06 00:54:24.309987 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.309998 | orchestrator | 2025-09-06 00:54:24.310009 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-06 00:54:24.310179 | orchestrator | Saturday 06 September 2025 00:52:19 +0000 (0:00:07.470) 0:01:05.520 **** 2025-09-06 00:54:24.310192 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:24.310203 | orchestrator | 2025-09-06 00:54:24.310214 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-06 00:54:24.310225 | orchestrator | Saturday 06 September 2025 00:52:21 +0000 (0:00:01.693) 0:01:07.213 **** 2025-09-06 00:54:24.310236 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:24.310246 | orchestrator | 2025-09-06 
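The sequence above ("Running MariaDB bootstrap container", then "Starting first MariaDB container" and the WSREP waits) follows the usual Galera pattern: exactly one node forms a new cluster with --wsrep-new-cluster, and the remaining nodes are started afterwards and join it. A rough sketch using community.docker.docker_container rather than the module the mariadb role actually uses; image, volume layout and variable names are assumptions:

# Sketch only; kolla-ansible drives the containers differently.
- name: Start the first MariaDB container and form a new Galera cluster
  community.docker.docker_container:
    name: mariadb
    image: registry.osism.tech/kolla/mariadb-server:2024.2
    state: started
    restart_policy: unless-stopped
    command: ["mariadbd", "--wsrep-new-cluster"]   # only on the bootstrap host
    volumes:
      - mariadb:/var/lib/mysql
  when: inventory_hostname == mariadb_bootstrap_host   # variable name assumed

- name: Start MariaDB on the remaining nodes so they join the running cluster
  community.docker.docker_container:
    name: mariadb
    image: registry.osism.tech/kolla/mariadb-server:2024.2
    state: started
    restart_policy: unless-stopped
    volumes:
      - mariadb:/var/lib/mysql
  when: inventory_hostname != mariadb_bootstrap_host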
00:54:24.310257 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-06 00:54:24.310268 | orchestrator | Saturday 06 September 2025 00:52:23 +0000 (0:00:02.303) 0:01:09.516 **** 2025-09-06 00:54:24.310279 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.310290 | orchestrator | 2025-09-06 00:54:24.310301 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-06 00:54:24.310312 | orchestrator | Saturday 06 September 2025 00:52:24 +0000 (0:00:00.133) 0:01:09.650 **** 2025-09-06 00:54:24.310322 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:24.310333 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.310344 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.310355 | orchestrator | 2025-09-06 00:54:24.310381 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-06 00:54:24.310392 | orchestrator | Saturday 06 September 2025 00:52:24 +0000 (0:00:00.333) 0:01:09.983 **** 2025-09-06 00:54:24.310403 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:24.310414 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-06 00:54:24.310425 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:54:24.310435 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:54:24.310446 | orchestrator | 2025-09-06 00:54:24.310457 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-06 00:54:24.310541 | orchestrator | skipping: no hosts matched 2025-09-06 00:54:24.310557 | orchestrator | 2025-09-06 00:54:24.310568 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-06 00:54:24.310579 | orchestrator | 2025-09-06 00:54:24.310590 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-06 00:54:24.310601 | orchestrator | Saturday 06 September 2025 00:52:25 +0000 (0:00:00.560) 0:01:10.544 **** 2025-09-06 00:54:24.310611 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:54:24.310622 | orchestrator | 2025-09-06 00:54:24.310633 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-06 00:54:24.310671 | orchestrator | Saturday 06 September 2025 00:52:43 +0000 (0:00:18.331) 0:01:28.875 **** 2025-09-06 00:54:24.310682 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:54:24.310692 | orchestrator | 2025-09-06 00:54:24.310703 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-06 00:54:24.310714 | orchestrator | Saturday 06 September 2025 00:53:03 +0000 (0:00:20.622) 0:01:49.498 **** 2025-09-06 00:54:24.310725 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:54:24.310735 | orchestrator | 2025-09-06 00:54:24.310746 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-06 00:54:24.310757 | orchestrator | 2025-09-06 00:54:24.310767 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-06 00:54:24.310778 | orchestrator | Saturday 06 September 2025 00:53:06 +0000 (0:00:02.516) 0:01:52.014 **** 2025-09-06 00:54:24.310798 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:54:24.310809 | orchestrator | 2025-09-06 00:54:24.310819 | orchestrator | TASK [mariadb : Wait for MariaDB service port 
liveness] ************************ 2025-09-06 00:54:24.310830 | orchestrator | Saturday 06 September 2025 00:53:30 +0000 (0:00:24.456) 0:02:16.470 **** 2025-09-06 00:54:24.310841 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:54:24.310852 | orchestrator | 2025-09-06 00:54:24.310863 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-06 00:54:24.310873 | orchestrator | Saturday 06 September 2025 00:53:47 +0000 (0:00:16.581) 0:02:33.052 **** 2025-09-06 00:54:24.310884 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:54:24.310895 | orchestrator | 2025-09-06 00:54:24.310906 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-06 00:54:24.310916 | orchestrator | 2025-09-06 00:54:24.310942 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-06 00:54:24.310954 | orchestrator | Saturday 06 September 2025 00:53:49 +0000 (0:00:02.423) 0:02:35.475 **** 2025-09-06 00:54:24.310965 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.310976 | orchestrator | 2025-09-06 00:54:24.310986 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-06 00:54:24.310997 | orchestrator | Saturday 06 September 2025 00:54:01 +0000 (0:00:11.504) 0:02:46.980 **** 2025-09-06 00:54:24.311008 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:24.311019 | orchestrator | 2025-09-06 00:54:24.311029 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-06 00:54:24.311040 | orchestrator | Saturday 06 September 2025 00:54:06 +0000 (0:00:04.647) 0:02:51.627 **** 2025-09-06 00:54:24.311051 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:24.311062 | orchestrator | 2025-09-06 00:54:24.311072 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-06 00:54:24.311083 | orchestrator | 2025-09-06 00:54:24.311094 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-06 00:54:24.311105 | orchestrator | Saturday 06 September 2025 00:54:08 +0000 (0:00:02.707) 0:02:54.334 **** 2025-09-06 00:54:24.311115 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:54:24.311126 | orchestrator | 2025-09-06 00:54:24.311137 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-06 00:54:24.311147 | orchestrator | Saturday 06 September 2025 00:54:09 +0000 (0:00:00.518) 0:02:54.853 **** 2025-09-06 00:54:24.311158 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.311169 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.311183 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.311195 | orchestrator | 2025-09-06 00:54:24.311208 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-06 00:54:24.311220 | orchestrator | Saturday 06 September 2025 00:54:11 +0000 (0:00:02.302) 0:02:57.156 **** 2025-09-06 00:54:24.311233 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.311245 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.311257 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.311270 | orchestrator | 2025-09-06 00:54:24.311283 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-06 00:54:24.311296 
| orchestrator | Saturday 06 September 2025 00:54:14 +0000 (0:00:02.453) 0:02:59.610 **** 2025-09-06 00:54:24.311308 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.311320 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.311332 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.311344 | orchestrator | 2025-09-06 00:54:24.311358 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-06 00:54:24.311371 | orchestrator | Saturday 06 September 2025 00:54:16 +0000 (0:00:02.175) 0:03:01.785 **** 2025-09-06 00:54:24.311383 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.311395 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.311414 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:54:24.311427 | orchestrator | 2025-09-06 00:54:24.311439 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-06 00:54:24.311452 | orchestrator | Saturday 06 September 2025 00:54:18 +0000 (0:00:02.123) 0:03:03.909 **** 2025-09-06 00:54:24.311465 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:54:24.311477 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:54:24.311489 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:54:24.311502 | orchestrator | 2025-09-06 00:54:24.311514 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-06 00:54:24.311527 | orchestrator | Saturday 06 September 2025 00:54:21 +0000 (0:00:02.905) 0:03:06.814 **** 2025-09-06 00:54:24.311538 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:54:24.311549 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:54:24.311560 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:54:24.311570 | orchestrator | 2025-09-06 00:54:24.311581 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:54:24.311592 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-06 00:54:24.311603 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-06 00:54:24.311616 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-06 00:54:24.311627 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-06 00:54:24.311675 | orchestrator | 2025-09-06 00:54:24.311696 | orchestrator | 2025-09-06 00:54:24.311713 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:54:24.311728 | orchestrator | Saturday 06 September 2025 00:54:21 +0000 (0:00:00.432) 0:03:07.247 **** 2025-09-06 00:54:24.311739 | orchestrator | =============================================================================== 2025-09-06 00:54:24.311750 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.79s 2025-09-06 00:54:24.311760 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 37.20s 2025-09-06 00:54:24.311771 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.50s 2025-09-06 00:54:24.311782 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.94s 2025-09-06 00:54:24.311792 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 
10.27s 2025-09-06 00:54:24.311803 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.47s 2025-09-06 00:54:24.311820 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.94s 2025-09-06 00:54:24.311838 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.65s 2025-09-06 00:54:24.311849 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.68s 2025-09-06 00:54:24.311860 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.54s 2025-09-06 00:54:24.311871 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.49s 2025-09-06 00:54:24.311881 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.95s 2025-09-06 00:54:24.311892 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.93s 2025-09-06 00:54:24.311903 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.92s 2025-09-06 00:54:24.311914 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.91s 2025-09-06 00:54:24.311924 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.87s 2025-09-06 00:54:24.311935 | orchestrator | Check MariaDB service --------------------------------------------------- 2.74s 2025-09-06 00:54:24.311953 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.71s 2025-09-06 00:54:24.311964 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.45s 2025-09-06 00:54:24.311975 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.30s 2025-09-06 00:54:24.311985 | orchestrator | 2025-09-06 00:54:24 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:54:24.311996 | orchestrator | 2025-09-06 00:54:24 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:24.312007 | orchestrator | 2025-09-06 00:54:24 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:24.312019 | orchestrator | 2025-09-06 00:54:24 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:27.356092 | orchestrator | 2025-09-06 00:54:27 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:54:27.356459 | orchestrator | 2025-09-06 00:54:27 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:27.357237 | orchestrator | 2025-09-06 00:54:27 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:27.357355 | orchestrator | 2025-09-06 00:54:27 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:30.401543 | orchestrator | 2025-09-06 00:54:30 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:54:30.401685 | orchestrator | 2025-09-06 00:54:30 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:30.403622 | orchestrator | 2025-09-06 00:54:30 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:30.403734 | orchestrator | 2025-09-06 00:54:30 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:33.447834 | orchestrator | 2025-09-06 00:54:33 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in 
state STARTED 2025-09-06 00:54:33.451379 | orchestrator | 2025-09-06 00:54:33 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:33.451807 | orchestrator | 2025-09-06 00:54:33 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:33.452414 | orchestrator | 2025-09-06 00:54:33 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:36.487345 | orchestrator | 2025-09-06 00:54:36 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:54:36.488788 | orchestrator | 2025-09-06 00:54:36 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:36.489461 | orchestrator | 2025-09-06 00:54:36 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:36.489607 | orchestrator | 2025-09-06 00:54:36 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:39.525452 | orchestrator | 2025-09-06 00:54:39 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:54:39.525713 | orchestrator | 2025-09-06 00:54:39 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:39.526519 | orchestrator | 2025-09-06 00:54:39 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:39.526645 | orchestrator | 2025-09-06 00:54:39 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:42.562539 | orchestrator | 2025-09-06 00:54:42 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:54:42.563189 | orchestrator | 2025-09-06 00:54:42 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:42.565406 | orchestrator | 2025-09-06 00:54:42 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:42.565798 | orchestrator | 2025-09-06 00:54:42 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:45.621163 | orchestrator | 2025-09-06 00:54:45 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:54:45.621807 | orchestrator | 2025-09-06 00:54:45 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:45.623930 | orchestrator | 2025-09-06 00:54:45 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:45.623955 | orchestrator | 2025-09-06 00:54:45 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:48.659310 | orchestrator | 2025-09-06 00:54:48 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:54:48.660033 | orchestrator | 2025-09-06 00:54:48 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:48.660730 | orchestrator | 2025-09-06 00:54:48 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:48.661210 | orchestrator | 2025-09-06 00:54:48 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:51.715292 | orchestrator | 2025-09-06 00:54:51 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:54:51.716888 | orchestrator | 2025-09-06 00:54:51 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:51.719232 | orchestrator | 2025-09-06 00:54:51 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:51.719290 | orchestrator | 2025-09-06 00:54:51 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:54.762130 | 
orchestrator | 2025-09-06 00:54:54 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:54:54.763549 | orchestrator | 2025-09-06 00:54:54 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:54.765471 | orchestrator | 2025-09-06 00:54:54 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:54.765504 | orchestrator | 2025-09-06 00:54:54 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:54:57.813485 | orchestrator | 2025-09-06 00:54:57 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:54:57.814978 | orchestrator | 2025-09-06 00:54:57 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:54:57.817001 | orchestrator | 2025-09-06 00:54:57 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:54:57.817029 | orchestrator | 2025-09-06 00:54:57 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:00.874710 | orchestrator | 2025-09-06 00:55:00 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:00.876567 | orchestrator | 2025-09-06 00:55:00 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:55:00.880203 | orchestrator | 2025-09-06 00:55:00 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:00.880230 | orchestrator | 2025-09-06 00:55:00 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:03.917524 | orchestrator | 2025-09-06 00:55:03 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:03.918436 | orchestrator | 2025-09-06 00:55:03 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:55:03.919520 | orchestrator | 2025-09-06 00:55:03 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:03.919571 | orchestrator | 2025-09-06 00:55:03 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:06.959491 | orchestrator | 2025-09-06 00:55:06 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:06.959621 | orchestrator | 2025-09-06 00:55:06 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:55:06.960132 | orchestrator | 2025-09-06 00:55:06 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:06.960157 | orchestrator | 2025-09-06 00:55:06 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:10.004981 | orchestrator | 2025-09-06 00:55:10 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:10.009476 | orchestrator | 2025-09-06 00:55:10 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:55:10.013084 | orchestrator | 2025-09-06 00:55:10 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:10.013154 | orchestrator | 2025-09-06 00:55:10 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:13.051480 | orchestrator | 2025-09-06 00:55:13 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:13.051942 | orchestrator | 2025-09-06 00:55:13 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:55:13.053942 | orchestrator | 2025-09-06 00:55:13 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:13.053967 | orchestrator | 2025-09-06 
00:55:13 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:16.094835 | orchestrator | 2025-09-06 00:55:16 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:16.096584 | orchestrator | 2025-09-06 00:55:16 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:55:16.099251 | orchestrator | 2025-09-06 00:55:16 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:16.099297 | orchestrator | 2025-09-06 00:55:16 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:19.146123 | orchestrator | 2025-09-06 00:55:19 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:19.147433 | orchestrator | 2025-09-06 00:55:19 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:55:19.149062 | orchestrator | 2025-09-06 00:55:19 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:19.149337 | orchestrator | 2025-09-06 00:55:19 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:22.193894 | orchestrator | 2025-09-06 00:55:22 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:22.195173 | orchestrator | 2025-09-06 00:55:22 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:55:22.197722 | orchestrator | 2025-09-06 00:55:22 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:22.197896 | orchestrator | 2025-09-06 00:55:22 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:25.244474 | orchestrator | 2025-09-06 00:55:25 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:25.247530 | orchestrator | 2025-09-06 00:55:25 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state STARTED 2025-09-06 00:55:25.250184 | orchestrator | 2025-09-06 00:55:25 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:25.250322 | orchestrator | 2025-09-06 00:55:25 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:28.292458 | orchestrator | 2025-09-06 00:55:28 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:28.296445 | orchestrator | 2025-09-06 00:55:28 | INFO  | Task a639098c-a36c-45f6-87af-239f58deac7f is in state SUCCESS 2025-09-06 00:55:28.299237 | orchestrator | 2025-09-06 00:55:28.299313 | orchestrator | 2025-09-06 00:55:28.299650 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-06 00:55:28.299670 | orchestrator | 2025-09-06 00:55:28.299682 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-06 00:55:28.299694 | orchestrator | Saturday 06 September 2025 00:53:16 +0000 (0:00:00.545) 0:00:00.545 **** 2025-09-06 00:55:28.299706 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:55:28.299718 | orchestrator | 2025-09-06 00:55:28.299729 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-06 00:55:28.299740 | orchestrator | Saturday 06 September 2025 00:53:17 +0000 (0:00:00.535) 0:00:01.081 **** 2025-09-06 00:55:28.299751 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:55:28.299763 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.300851 | orchestrator | ok: [testbed-node-5] 2025-09-06 
00:55:28.300869 | orchestrator | 2025-09-06 00:55:28.300881 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-06 00:55:28.300893 | orchestrator | Saturday 06 September 2025 00:53:18 +0000 (0:00:00.573) 0:00:01.654 **** 2025-09-06 00:55:28.300903 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.300914 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:55:28.300925 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:55:28.300936 | orchestrator | 2025-09-06 00:55:28.300947 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-06 00:55:28.300958 | orchestrator | Saturday 06 September 2025 00:53:18 +0000 (0:00:00.271) 0:00:01.926 **** 2025-09-06 00:55:28.300969 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.300980 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:55:28.300992 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:55:28.301003 | orchestrator | 2025-09-06 00:55:28.301014 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-06 00:55:28.301025 | orchestrator | Saturday 06 September 2025 00:53:18 +0000 (0:00:00.692) 0:00:02.618 **** 2025-09-06 00:55:28.301035 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.301046 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:55:28.301057 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:55:28.301067 | orchestrator | 2025-09-06 00:55:28.301085 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-06 00:55:28.301096 | orchestrator | Saturday 06 September 2025 00:53:19 +0000 (0:00:00.288) 0:00:02.907 **** 2025-09-06 00:55:28.301107 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.301118 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:55:28.301128 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:55:28.301139 | orchestrator | 2025-09-06 00:55:28.301150 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-06 00:55:28.301161 | orchestrator | Saturday 06 September 2025 00:53:19 +0000 (0:00:00.238) 0:00:03.146 **** 2025-09-06 00:55:28.301172 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.301182 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:55:28.301193 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:55:28.301204 | orchestrator | 2025-09-06 00:55:28.301215 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-06 00:55:28.301226 | orchestrator | Saturday 06 September 2025 00:53:19 +0000 (0:00:00.264) 0:00:03.410 **** 2025-09-06 00:55:28.301237 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.301249 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.301260 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.301270 | orchestrator | 2025-09-06 00:55:28.301281 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-06 00:55:28.301316 | orchestrator | Saturday 06 September 2025 00:53:20 +0000 (0:00:00.373) 0:00:03.783 **** 2025-09-06 00:55:28.301327 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.301338 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:55:28.301349 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:55:28.301360 | orchestrator | 2025-09-06 00:55:28.301371 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 
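The "Check if podman binary is present" and "Set_fact container_binary" tasks above pick the container runtime used by the rest of the ceph-facts run (the later "docker ps" invocations show that docker was selected on these nodes). A minimal Python sketch of that selection logic, assuming a plain PATH lookup; this is an illustration, not the role's actual implementation:

#!/usr/bin/env python3
"""Illustrative only: prefer podman when it is installed, otherwise fall
back to docker, mirroring the runtime check logged above."""
import shutil

def pick_container_binary() -> str:
    # If a podman binary is on PATH, use it; otherwise assume docker,
    # which matches the "docker ps" calls seen later in this log.
    return "podman" if shutil.which("podman") else "docker"

if __name__ == "__main__":
    print(pick_container_binary())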
2025-09-06 00:55:28.301382 | orchestrator | Saturday 06 September 2025 00:53:20 +0000 (0:00:00.258) 0:00:04.041 **** 2025-09-06 00:55:28.301393 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-06 00:55:28.301403 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-06 00:55:28.301415 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-06 00:55:28.301428 | orchestrator | 2025-09-06 00:55:28.301440 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-06 00:55:28.301453 | orchestrator | Saturday 06 September 2025 00:53:21 +0000 (0:00:00.588) 0:00:04.630 **** 2025-09-06 00:55:28.301466 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.301479 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:55:28.301490 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:55:28.301503 | orchestrator | 2025-09-06 00:55:28.301515 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-06 00:55:28.301528 | orchestrator | Saturday 06 September 2025 00:53:21 +0000 (0:00:00.345) 0:00:04.975 **** 2025-09-06 00:55:28.301559 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-06 00:55:28.301572 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-06 00:55:28.301585 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-06 00:55:28.301597 | orchestrator | 2025-09-06 00:55:28.301610 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-06 00:55:28.301623 | orchestrator | Saturday 06 September 2025 00:53:23 +0000 (0:00:02.113) 0:00:07.089 **** 2025-09-06 00:55:28.301636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-06 00:55:28.301650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-06 00:55:28.301662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-06 00:55:28.301675 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.301688 | orchestrator | 2025-09-06 00:55:28.301701 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-06 00:55:28.301766 | orchestrator | Saturday 06 September 2025 00:53:23 +0000 (0:00:00.417) 0:00:07.507 **** 2025-09-06 00:55:28.301783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.301798 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.301809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.301820 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.301831 | orchestrator | 2025-09-06 00:55:28.301841 | orchestrator | TASK 
[ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-06 00:55:28.301852 | orchestrator | Saturday 06 September 2025 00:53:24 +0000 (0:00:00.812) 0:00:08.319 **** 2025-09-06 00:55:28.301866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.301895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.301907 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.301918 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.301929 | orchestrator | 2025-09-06 00:55:28.301940 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-06 00:55:28.301951 | orchestrator | Saturday 06 September 2025 00:53:24 +0000 (0:00:00.158) 0:00:08.478 **** 2025-09-06 00:55:28.301965 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b667102984a4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-06 00:53:21.947815', 'end': '2025-09-06 00:53:21.998745', 'delta': '0:00:00.050930', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b667102984a4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-06 00:55:28.301981 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b1df3cd20d27', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-06 00:53:22.707882', 'end': '2025-09-06 00:53:22.756357', 'delta': '0:00:00.048475', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b1df3cd20d27'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-06 00:55:28.302115 | orchestrator | ok: 
[testbed-node-3] => (item={'changed': False, 'stdout': 'f1d3bd1a83b1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-06 00:53:23.284929', 'end': '2025-09-06 00:53:23.323458', 'delta': '0:00:00.038529', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f1d3bd1a83b1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-06 00:55:28.302135 | orchestrator | 2025-09-06 00:55:28.302146 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-06 00:55:28.302157 | orchestrator | Saturday 06 September 2025 00:53:25 +0000 (0:00:00.369) 0:00:08.847 **** 2025-09-06 00:55:28.302177 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.302188 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:55:28.302198 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:55:28.302209 | orchestrator | 2025-09-06 00:55:28.302220 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-06 00:55:28.302231 | orchestrator | Saturday 06 September 2025 00:53:25 +0000 (0:00:00.441) 0:00:09.289 **** 2025-09-06 00:55:28.302242 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-06 00:55:28.302253 | orchestrator | 2025-09-06 00:55:28.302264 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-06 00:55:28.302274 | orchestrator | Saturday 06 September 2025 00:53:27 +0000 (0:00:01.775) 0:00:11.065 **** 2025-09-06 00:55:28.302285 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.302296 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.302307 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.302317 | orchestrator | 2025-09-06 00:55:28.302328 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-06 00:55:28.302344 | orchestrator | Saturday 06 September 2025 00:53:27 +0000 (0:00:00.298) 0:00:11.363 **** 2025-09-06 00:55:28.302355 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.302366 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.302377 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.302387 | orchestrator | 2025-09-06 00:55:28.302398 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-06 00:55:28.302409 | orchestrator | Saturday 06 September 2025 00:53:28 +0000 (0:00:00.397) 0:00:11.760 **** 2025-09-06 00:55:28.302420 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.302430 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.302441 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.302452 | orchestrator | 2025-09-06 00:55:28.302462 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-06 00:55:28.302473 | orchestrator | Saturday 06 September 2025 00:53:28 +0000 (0:00:00.460) 0:00:12.221 **** 2025-09-06 00:55:28.302484 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.302494 | orchestrator | 2025-09-06 00:55:28.302505 | orchestrator | TASK 
[ceph-facts : Generate cluster fsid] ************************************** 2025-09-06 00:55:28.302516 | orchestrator | Saturday 06 September 2025 00:53:28 +0000 (0:00:00.134) 0:00:12.355 **** 2025-09-06 00:55:28.302526 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.302537 | orchestrator | 2025-09-06 00:55:28.302617 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-06 00:55:28.302629 | orchestrator | Saturday 06 September 2025 00:53:28 +0000 (0:00:00.227) 0:00:12.583 **** 2025-09-06 00:55:28.302639 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.302648 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.302658 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.302667 | orchestrator | 2025-09-06 00:55:28.302677 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-06 00:55:28.302686 | orchestrator | Saturday 06 September 2025 00:53:29 +0000 (0:00:00.290) 0:00:12.873 **** 2025-09-06 00:55:28.302696 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.302705 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.302715 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.302724 | orchestrator | 2025-09-06 00:55:28.302734 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-06 00:55:28.302744 | orchestrator | Saturday 06 September 2025 00:53:29 +0000 (0:00:00.321) 0:00:13.195 **** 2025-09-06 00:55:28.302753 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.302763 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.302772 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.302782 | orchestrator | 2025-09-06 00:55:28.302791 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-06 00:55:28.302801 | orchestrator | Saturday 06 September 2025 00:53:30 +0000 (0:00:00.479) 0:00:13.674 **** 2025-09-06 00:55:28.302823 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.302833 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.302842 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.302852 | orchestrator | 2025-09-06 00:55:28.302862 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-06 00:55:28.302871 | orchestrator | Saturday 06 September 2025 00:53:30 +0000 (0:00:00.313) 0:00:13.988 **** 2025-09-06 00:55:28.302881 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.302891 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.302900 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.302910 | orchestrator | 2025-09-06 00:55:28.302919 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-06 00:55:28.302929 | orchestrator | Saturday 06 September 2025 00:53:30 +0000 (0:00:00.309) 0:00:14.298 **** 2025-09-06 00:55:28.302939 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.302948 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.302958 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.302967 | orchestrator | 2025-09-06 00:55:28.302977 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-06 00:55:28.303021 | orchestrator | Saturday 06 September 2025 00:53:31 +0000 (0:00:00.333) 0:00:14.631 **** 
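The "Resolve device link(s)" and "Set_fact build devices from resolved symlinks" tasks above (skipped on all three storage nodes in this run) exist to turn any /dev/disk/by-* style symlinks in the configured device list into canonical block-device paths. A small Python sketch of that kind of normalisation, assuming a simple realpath-based resolution; the by-id path below is a made-up example, not taken from this deployment:

#!/usr/bin/env python3
"""Illustrative only: resolve device symlinks to their canonical
/dev/sdX nodes and de-duplicate while preserving order."""
import os

def resolve_devices(devices):
    resolved = []
    for dev in devices:
        real = os.path.realpath(dev)  # follows /dev/disk/by-* symlinks
        if real not in resolved:
            resolved.append(real)
    return resolved

if __name__ == "__main__":
    # Hypothetical by-id path, shown only to demonstrate the call.
    print(resolve_devices(["/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_example"]))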
2025-09-06 00:55:28.303033 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.303043 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.303052 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.303062 | orchestrator | 2025-09-06 00:55:28.303071 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-06 00:55:28.303081 | orchestrator | Saturday 06 September 2025 00:53:31 +0000 (0:00:00.504) 0:00:15.135 **** 2025-09-06 00:55:28.303092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567-osd--block--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567', 'dm-uuid-LVM-r6f80mz9e22Vmz3H2GU0Ef84wrC6l1Ff93I1fJ96d512su5aeJbRbgGkDCiB9O2q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e6b4ea58--4fde--56e5--979f--346e927a82c3-osd--block--e6b4ea58--4fde--56e5--979f--346e927a82c3', 'dm-uuid-LVM-HjJ7GKB5yLddflqwhdAdEzWzRwWiY2ZFQVTzzIqNyOhGoOnap4BNgRvCK8MrZYxN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part1', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part14', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part15', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part16', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9969153--fa79--5368--8c16--a33775dfe5f6-osd--block--e9969153--fa79--5368--8c16--a33775dfe5f6', 'dm-uuid-LVM-cu4va0YeCfZXWXc5bD75hcGN10dQTwekTMF8ZROPB9Y9NfccWc0R6zJniHlIFj9E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567-osd--block--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tnpbur-SzDW-WQ8q-U5AF-PBL6-Su6o-vobvpB', 'scsi-0QEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8', 'scsi-SQEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--473d4611--c66c--5516--9b6d--fd0b18ba2fe0-osd--block--473d4611--c66c--5516--9b6d--fd0b18ba2fe0', 'dm-uuid-LVM-fycm5QQlGOho71zbS5RzdZZtfc1SZaX2Hr30eDfIJy9FbnEjzTcsZcaeVbsXeROx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e6b4ea58--4fde--56e5--979f--346e927a82c3-osd--block--e6b4ea58--4fde--56e5--979f--346e927a82c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iCop39-Riup-vFef-zeMs-SIWe-bIEY-BLh0Jz', 'scsi-0QEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff', 'scsi-SQEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5', 'scsi-SQEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e9969153--fa79--5368--8c16--a33775dfe5f6-osd--block--e9969153--fa79--5368--8c16--a33775dfe5f6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z7DZGc-sTzo-CcSh-NBkn-EaEg-3VEw-R0yWte', 'scsi-0QEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74', 'scsi-SQEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--473d4611--c66c--5516--9b6d--fd0b18ba2fe0-osd--block--473d4611--c66c--5516--9b6d--fd0b18ba2fe0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qg95jI-e6Nq-hStd-2tYS-uIOn-9vf1-GjhhtD', 'scsi-0QEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba', 'scsi-SQEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303530 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.303564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7', 'scsi-SQEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303586 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.303596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f-osd--block--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f', 'dm-uuid-LVM-S1sgywPEkjpv9d0wsFPQU3cEbxfDfA6xyq1Srsvdb8p4ZPCF91EIdWz8Ul8NLFKG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--d801673f--a74f--56ad--ad0d--e97588ff4709-osd--block--d801673f--a74f--56ad--ad0d--e97588ff4709', 'dm-uuid-LVM-gsAMb0k6MRCpv6Q1MlP1kUCTMe8oIPXrC4bOdSsf658daatcLZ99by6ZGXXFUiqT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-06 00:55:28.303723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part1', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part14', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part15', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part16', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f-osd--block--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MmVSvD-xcAG-cedU-B8my-xGk5-nlg6-2Khtre', 'scsi-0QEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b', 'scsi-SQEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d801673f--a74f--56ad--ad0d--e97588ff4709-osd--block--d801673f--a74f--56ad--ad0d--e97588ff4709'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AQIorT-mZph-jOfK-swZz-e1si-sNhx-b5mwDO', 'scsi-0QEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4', 'scsi-SQEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634', 'scsi-SQEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-06 00:55:28.303789 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.303799 | orchestrator | 2025-09-06 00:55:28.303809 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-06 00:55:28.303818 | orchestrator | Saturday 06 September 2025 00:53:32 +0000 (0:00:00.612) 0:00:15.747 **** 2025-09-06 00:55:28.303829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567-osd--block--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567', 'dm-uuid-LVM-r6f80mz9e22Vmz3H2GU0Ef84wrC6l1Ff93I1fJ96d512su5aeJbRbgGkDCiB9O2q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303853 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e6b4ea58--4fde--56e5--979f--346e927a82c3-osd--block--e6b4ea58--4fde--56e5--979f--346e927a82c3', 'dm-uuid-LVM-HjJ7GKB5yLddflqwhdAdEzWzRwWiY2ZFQVTzzIqNyOhGoOnap4BNgRvCK8MrZYxN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303864 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303911 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303931 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303951 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9969153--fa79--5368--8c16--a33775dfe5f6-osd--block--e9969153--fa79--5368--8c16--a33775dfe5f6', 'dm-uuid-LVM-cu4va0YeCfZXWXc5bD75hcGN10dQTwekTMF8ZROPB9Y9NfccWc0R6zJniHlIFj9E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303961 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303977 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--473d4611--c66c--5516--9b6d--fd0b18ba2fe0-osd--block--473d4611--c66c--5516--9b6d--fd0b18ba2fe0', 'dm-uuid-LVM-fycm5QQlGOho71zbS5RzdZZtfc1SZaX2Hr30eDfIJy9FbnEjzTcsZcaeVbsXeROx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.303994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part1', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part14', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part15', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part16', 'scsi-SQEMU_QEMU_HARDDISK_078fca39-f411-429e-9193-aac97937ed20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304012 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304022 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567-osd--block--6c2b7b83--cfe0--5d78--88e9--40d3d3c4d567'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tnpbur-SzDW-WQ8q-U5AF-PBL6-Su6o-vobvpB', 'scsi-0QEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8', 'scsi-SQEMU_QEMU_HARDDISK_25619c3a-8da8-43cb-a754-e63f9339b6a8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304039 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304053 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e6b4ea58--4fde--56e5--979f--346e927a82c3-osd--block--e6b4ea58--4fde--56e5--979f--346e927a82c3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iCop39-Riup-vFef-zeMs-SIWe-bIEY-BLh0Jz', 'scsi-0QEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff', 'scsi-SQEMU_QEMU_HARDDISK_ff2df27d-11ce-481a-9d5b-51960fd8aeff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304073 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5', 'scsi-SQEMU_QEMU_HARDDISK_4b95c2e9-50f3-4582-afe8-fe749e38f7c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304093 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304111 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304121 | orchestrator | skipping: [testbed-node-4] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304137 | orchestrator | skipping: [testbed-node-3] 2025-09-06 00:55:28.304151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304162 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304172 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304189 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_f18475ff-2e12-4c30-992f-77f53bec54c1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e9969153--fa79--5368--8c16--a33775dfe5f6-osd--block--e9969153--fa79--5368--8c16--a33775dfe5f6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z7DZGc-sTzo-CcSh-NBkn-EaEg-3VEw-R0yWte', 'scsi-0QEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74', 'scsi-SQEMU_QEMU_HARDDISK_ff2245c5-2416-47aa-a035-68e781151c74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304222 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--473d4611--c66c--5516--9b6d--fd0b18ba2fe0-osd--block--473d4611--c66c--5516--9b6d--fd0b18ba2fe0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qg95jI-e6Nq-hStd-2tYS-uIOn-9vf1-GjhhtD', 'scsi-0QEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba', 'scsi-SQEMU_QEMU_HARDDISK_60cce0b1-ac13-42c3-8474-28bd0504aaba'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304233 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7', 'scsi-SQEMU_QEMU_HARDDISK_8526d803-93b6-4435-afbc-8fa992e96ed7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304250 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304266 | orchestrator | skipping: [testbed-node-4] 2025-09-06 00:55:28.304277 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f-osd--block--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f', 'dm-uuid-LVM-S1sgywPEkjpv9d0wsFPQU3cEbxfDfA6xyq1Srsvdb8p4ZPCF91EIdWz8Ul8NLFKG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304290 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d801673f--a74f--56ad--ad0d--e97588ff4709-osd--block--d801673f--a74f--56ad--ad0d--e97588ff4709', 
'dm-uuid-LVM-gsAMb0k6MRCpv6Q1MlP1kUCTMe8oIPXrC4bOdSsf658daatcLZ99by6ZGXXFUiqT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304301 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304311 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304321 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304336 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304354 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304371 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304381 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304392 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304408 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part1', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part14', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part15', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part16', 'scsi-SQEMU_QEMU_HARDDISK_16b07e0f-1842-4d83-ac35-c8852fb0b626-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304430 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f-osd--block--6f5e0d3a--48d2--5dc7--b4c5--38e7a8a8ed6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MmVSvD-xcAG-cedU-B8my-xGk5-nlg6-2Khtre', 'scsi-0QEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b', 'scsi-SQEMU_QEMU_HARDDISK_8fcef200-ddbb-407c-9fba-bf8a684fde8b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304441 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d801673f--a74f--56ad--ad0d--e97588ff4709-osd--block--d801673f--a74f--56ad--ad0d--e97588ff4709'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AQIorT-mZph-jOfK-swZz-e1si-sNhx-b5mwDO', 'scsi-0QEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4', 'scsi-SQEMU_QEMU_HARDDISK_a6f67441-1efd-42d1-ae3b-c249d4af45c4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304451 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634', 'scsi-SQEMU_QEMU_HARDDISK_59e1d33e-4f47-4176-9d4f-6bd749639634'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304467 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-06-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-06 00:55:28.304484 | orchestrator | skipping: [testbed-node-5] 2025-09-06 00:55:28.304494 | orchestrator | 2025-09-06 00:55:28.304503 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-06 00:55:28.304513 | orchestrator | Saturday 06 September 2025 00:53:32 +0000 (0:00:00.621) 0:00:16.369 **** 2025-09-06 00:55:28.304523 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.304533 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:55:28.304559 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:55:28.304569 | orchestrator | 2025-09-06 00:55:28.304579 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-06 00:55:28.304588 | orchestrator | Saturday 06 September 2025 00:53:33 +0000 (0:00:00.691) 0:00:17.061 **** 2025-09-06 00:55:28.304598 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:55:28.304608 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:55:28.304617 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:55:28.304626 | orchestrator | 2025-09-06 00:55:28.304636 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-06 00:55:28.304646 | orchestrator | Saturday 06 September 2025 00:53:33 +0000 (0:00:00.487) 0:00:17.548 **** 2025-09-06 00:55:28.304655 | 
orchestrator | ok: [testbed-node-3]
2025-09-06 00:55:28.304665 | orchestrator | ok: [testbed-node-4]
2025-09-06 00:55:28.304674 | orchestrator | ok: [testbed-node-5]
2025-09-06 00:55:28.304684 | orchestrator |
2025-09-06 00:55:28.304693 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-06 00:55:28.304703 | orchestrator | Saturday 06 September 2025 00:53:34 +0000 (0:00:00.660) 0:00:18.209 ****
2025-09-06 00:55:28.304713 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:55:28.304722 | orchestrator | skipping: [testbed-node-4]
2025-09-06 00:55:28.304732 | orchestrator | skipping: [testbed-node-5]
2025-09-06 00:55:28.304741 | orchestrator |
2025-09-06 00:55:28.304750 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-06 00:55:28.304764 | orchestrator | Saturday 06 September 2025 00:53:34 +0000 (0:00:00.301) 0:00:18.510 ****
2025-09-06 00:55:28.304775 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:55:28.304784 | orchestrator | skipping: [testbed-node-4]
2025-09-06 00:55:28.304794 | orchestrator | skipping: [testbed-node-5]
2025-09-06 00:55:28.304803 | orchestrator |
2025-09-06 00:55:28.304813 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-06 00:55:28.304822 | orchestrator | Saturday 06 September 2025 00:53:35 +0000 (0:00:00.407) 0:00:18.918 ****
2025-09-06 00:55:28.304832 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:55:28.304841 | orchestrator | skipping: [testbed-node-4]
2025-09-06 00:55:28.304851 | orchestrator | skipping: [testbed-node-5]
2025-09-06 00:55:28.304860 | orchestrator |
2025-09-06 00:55:28.304869 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-06 00:55:28.304879 | orchestrator | Saturday 06 September 2025 00:53:35 +0000 (0:00:00.486) 0:00:19.404 ****
2025-09-06 00:55:28.304889 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-06 00:55:28.304899 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-06 00:55:28.304908 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-06 00:55:28.304918 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-06 00:55:28.304928 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-06 00:55:28.304937 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-06 00:55:28.304947 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-06 00:55:28.304962 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-06 00:55:28.304972 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-06 00:55:28.304981 | orchestrator |
2025-09-06 00:55:28.304991 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-06 00:55:28.305001 | orchestrator | Saturday 06 September 2025 00:53:36 +0000 (0:00:00.848) 0:00:20.252 ****
2025-09-06 00:55:28.305010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-06 00:55:28.305020 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-06 00:55:28.305029 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-06 00:55:28.305039 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:55:28.305048 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-06 00:55:28.305058 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-06 00:55:28.305067 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-06 00:55:28.305076 | orchestrator | skipping: [testbed-node-4]
2025-09-06 00:55:28.305086 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-06 00:55:28.305095 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-06 00:55:28.305104 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-06 00:55:28.305114 | orchestrator | skipping: [testbed-node-5]
2025-09-06 00:55:28.305123 | orchestrator |
2025-09-06 00:55:28.305133 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-06 00:55:28.305142 | orchestrator | Saturday 06 September 2025 00:53:36 +0000 (0:00:00.352) 0:00:20.605 ****
2025-09-06 00:55:28.305152 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-06 00:55:28.305162 | orchestrator |
2025-09-06 00:55:28.305172 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-06 00:55:28.305182 | orchestrator | Saturday 06 September 2025 00:53:37 +0000 (0:00:00.693) 0:00:21.298 ****
2025-09-06 00:55:28.305192 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:55:28.305202 | orchestrator | skipping: [testbed-node-4]
2025-09-06 00:55:28.305211 | orchestrator | skipping: [testbed-node-5]
2025-09-06 00:55:28.305221 | orchestrator |
2025-09-06 00:55:28.305235 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-06 00:55:28.305245 | orchestrator | Saturday 06 September 2025 00:53:37 +0000 (0:00:00.316) 0:00:21.615 ****
2025-09-06 00:55:28.305255 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:55:28.305265 | orchestrator | skipping: [testbed-node-4]
2025-09-06 00:55:28.305274 | orchestrator | skipping: [testbed-node-5]
2025-09-06 00:55:28.305284 | orchestrator |
2025-09-06 00:55:28.305293 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-06 00:55:28.305303 | orchestrator | Saturday 06 September 2025 00:53:38 +0000 (0:00:00.338) 0:00:21.954 ****
2025-09-06 00:55:28.305313 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:55:28.305322 | orchestrator | skipping: [testbed-node-4]
2025-09-06 00:55:28.305332 | orchestrator | skipping: [testbed-node-5]
2025-09-06 00:55:28.305341 | orchestrator |
2025-09-06 00:55:28.305351 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-06 00:55:28.305360 | orchestrator | Saturday 06 September 2025 00:53:38 +0000 (0:00:00.383) 0:00:22.337 ****
2025-09-06 00:55:28.305370 | orchestrator | ok: [testbed-node-3]
2025-09-06 00:55:28.305379 | orchestrator | ok: [testbed-node-4]
2025-09-06 00:55:28.305389 | orchestrator | ok: [testbed-node-5]
2025-09-06 00:55:28.305398 | orchestrator |
2025-09-06 00:55:28.305408 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-06 00:55:28.305417 | orchestrator | Saturday 06 September 2025 00:53:39 +0000 (0:00:00.615) 0:00:22.952 ****
2025-09-06 00:55:28.305427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-06 00:55:28.305442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-06 00:55:28.305451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-06 00:55:28.305461 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:55:28.305470 | orchestrator |
2025-09-06 00:55:28.305480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-06 00:55:28.305489 | orchestrator | Saturday 06 September 2025 00:53:39 +0000 (0:00:00.367) 0:00:23.320 ****
2025-09-06 00:55:28.305499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-06 00:55:28.305509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-06 00:55:28.305518 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-06 00:55:28.305528 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:55:28.305537 | orchestrator |
2025-09-06 00:55:28.305562 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-06 00:55:28.305572 | orchestrator | Saturday 06 September 2025 00:53:40 +0000 (0:00:00.376) 0:00:23.696 ****
2025-09-06 00:55:28.305582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-06 00:55:28.305591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-06 00:55:28.305601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-06 00:55:28.305610 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:55:28.305620 | orchestrator |
2025-09-06 00:55:28.305629 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-06 00:55:28.305639 | orchestrator | Saturday 06 September 2025 00:53:40 +0000 (0:00:00.426) 0:00:24.123 ****
2025-09-06 00:55:28.305648 | orchestrator | ok: [testbed-node-3]
2025-09-06 00:55:28.305658 | orchestrator | ok: [testbed-node-4]
2025-09-06 00:55:28.305667 | orchestrator | ok: [testbed-node-5]
2025-09-06 00:55:28.305677 | orchestrator |
2025-09-06 00:55:28.305687 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-06 00:55:28.305696 | orchestrator | Saturday 06 September 2025 00:53:40 +0000 (0:00:00.312) 0:00:24.435 ****
2025-09-06 00:55:28.305706 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-06 00:55:28.305715 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-06 00:55:28.305725 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-06 00:55:28.305734 | orchestrator |
2025-09-06 00:55:28.305744 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-06 00:55:28.305753 | orchestrator | Saturday 06 September 2025 00:53:41 +0000 (0:00:00.507) 0:00:24.942 ****
2025-09-06 00:55:28.305763 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-06 00:55:28.305773 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-06 00:55:28.305782 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-06 00:55:28.305792 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-06 00:55:28.305801 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-06 00:55:28.305811 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-06 00:55:28.305821 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-06 00:55:28.305830 | orchestrator |
2025-09-06 00:55:28.305840 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-06 00:55:28.305849 | orchestrator | Saturday 06 September 2025 00:53:42 +0000 (0:00:00.960) 0:00:25.903 ****
2025-09-06 00:55:28.305859 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-06 00:55:28.305869 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-06 00:55:28.305878 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-06 00:55:28.305887 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-06 00:55:28.305903 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-06 00:55:28.305912 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-06 00:55:28.305922 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-06 00:55:28.305931 | orchestrator |
2025-09-06 00:55:28.305945 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-09-06 00:55:28.305956 | orchestrator | Saturday 06 September 2025 00:53:44 +0000 (0:00:01.998) 0:00:27.902 ****
2025-09-06 00:55:28.305965 | orchestrator | skipping: [testbed-node-3]
2025-09-06 00:55:28.305975 | orchestrator | skipping: [testbed-node-4]
2025-09-06 00:55:28.305984 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-09-06 00:55:28.305994 | orchestrator |
2025-09-06 00:55:28.306067 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-09-06 00:55:28.306080 | orchestrator | Saturday 06 September 2025 00:53:44 +0000 (0:00:00.369) 0:00:28.272 ****
2025-09-06 00:55:28.306091 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-06 00:55:28.306101 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-06 00:55:28.306111 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-06 00:55:28.306125 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-06 00:55:28.306135 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-06 00:55:28.306145 | orchestrator |
2025-09-06 00:55:28.306155 | orchestrator | TASK [generate keys] ***********************************************************
2025-09-06 00:55:28.306165 | orchestrator | Saturday 06 September 2025 00:54:32 +0000 (0:00:47.352) 0:01:15.624 ****
2025-09-06 00:55:28.306174 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306184 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306193 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306203 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306213 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306222 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306232 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-09-06 00:55:28.306242 | orchestrator |
2025-09-06 00:55:28.306251 | orchestrator | TASK [get keys from monitors] **************************************************
2025-09-06 00:55:28.306260 | orchestrator | Saturday 06 September 2025 00:54:57 +0000 (0:00:25.071) 0:01:40.695 ****
2025-09-06 00:55:28.306270 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306286 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306296 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306306 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306315 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306324 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306334 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-06 00:55:28.306343 | orchestrator |
2025-09-06 00:55:28.306353 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-09-06 00:55:28.306363 | orchestrator | Saturday 06 September 2025 00:55:09 +0000 (0:00:12.642) 0:01:53.338 ****
2025-09-06 00:55:28.306372 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306381 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-06 00:55:28.306391 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-06 00:55:28.306401 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306410 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-06 00:55:28.306420 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-06 00:55:28.306435 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-06 00:55:28.306446 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-06 00:55:28.306455 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
testbed-node-2(192.168.16.12)] => (item=None) 2025-09-06 00:55:28.306465 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:55:28.306474 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-06 00:55:28.306484 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-06 00:55:28.306493 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:55:28.306503 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-06 00:55:28.306512 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-06 00:55:28.306522 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-06 00:55:28.306532 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-06 00:55:28.306566 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-06 00:55:28.306577 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-06 00:55:28.306586 | orchestrator | 2025-09-06 00:55:28.306596 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:55:28.306605 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-06 00:55:28.306618 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-06 00:55:28.306633 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-06 00:55:28.306643 | orchestrator | 2025-09-06 00:55:28.306653 | orchestrator | 2025-09-06 00:55:28.306662 | orchestrator | 2025-09-06 00:55:28.306672 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:55:28.306681 | orchestrator | Saturday 06 September 2025 00:55:27 +0000 (0:00:17.618) 0:02:10.956 **** 2025-09-06 00:55:28.306691 | orchestrator | =============================================================================== 2025-09-06 00:55:28.306709 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.35s 2025-09-06 00:55:28.306719 | orchestrator | generate keys ---------------------------------------------------------- 25.07s 2025-09-06 00:55:28.306729 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.62s 2025-09-06 00:55:28.306738 | orchestrator | get keys from monitors ------------------------------------------------- 12.64s 2025-09-06 00:55:28.306748 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.11s 2025-09-06 00:55:28.306757 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.00s 2025-09-06 00:55:28.306767 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.78s 2025-09-06 00:55:28.306776 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.96s 2025-09-06 00:55:28.306786 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2025-09-06 00:55:28.306796 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.81s 2025-09-06 00:55:28.306805 | orchestrator | 
ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.69s 2025-09-06 00:55:28.306815 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.69s 2025-09-06 00:55:28.306824 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.69s 2025-09-06 00:55:28.306834 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s 2025-09-06 00:55:28.306843 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s 2025-09-06 00:55:28.306853 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.62s 2025-09-06 00:55:28.306863 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.61s 2025-09-06 00:55:28.306872 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.59s 2025-09-06 00:55:28.306882 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.57s 2025-09-06 00:55:28.306891 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.54s 2025-09-06 00:55:28.306901 | orchestrator | 2025-09-06 00:55:28 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:28.306911 | orchestrator | 2025-09-06 00:55:28 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:31.344289 | orchestrator | 2025-09-06 00:55:31 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:31.347343 | orchestrator | 2025-09-06 00:55:31 | INFO  | Task e17840e1-6cf0-4bac-a499-688ea8973030 is in state STARTED 2025-09-06 00:55:31.349178 | orchestrator | 2025-09-06 00:55:31 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:31.349589 | orchestrator | 2025-09-06 00:55:31 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:34.391816 | orchestrator | 2025-09-06 00:55:34 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:34.394473 | orchestrator | 2025-09-06 00:55:34 | INFO  | Task e17840e1-6cf0-4bac-a499-688ea8973030 is in state STARTED 2025-09-06 00:55:34.396323 | orchestrator | 2025-09-06 00:55:34 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:34.396793 | orchestrator | 2025-09-06 00:55:34 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:37.451231 | orchestrator | 2025-09-06 00:55:37 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:37.451662 | orchestrator | 2025-09-06 00:55:37 | INFO  | Task e17840e1-6cf0-4bac-a499-688ea8973030 is in state STARTED 2025-09-06 00:55:37.453104 | orchestrator | 2025-09-06 00:55:37 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:37.453161 | orchestrator | 2025-09-06 00:55:37 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:40.494704 | orchestrator | 2025-09-06 00:55:40 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:40.496189 | orchestrator | 2025-09-06 00:55:40 | INFO  | Task e17840e1-6cf0-4bac-a499-688ea8973030 is in state STARTED 2025-09-06 00:55:40.498381 | orchestrator | 2025-09-06 00:55:40 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:40.498408 | orchestrator | 2025-09-06 00:55:40 | INFO  | Wait 1 second(s) until the next check 2025-09-06 
00:55:43.546661 | orchestrator | 2025-09-06 00:55:43 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:43.547166 | orchestrator | 2025-09-06 00:55:43 | INFO  | Task e17840e1-6cf0-4bac-a499-688ea8973030 is in state STARTED 2025-09-06 00:55:43.556357 | orchestrator | 2025-09-06 00:55:43 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:43.556383 | orchestrator | 2025-09-06 00:55:43 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:46.607132 | orchestrator | 2025-09-06 00:55:46 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:46.609109 | orchestrator | 2025-09-06 00:55:46 | INFO  | Task e17840e1-6cf0-4bac-a499-688ea8973030 is in state STARTED 2025-09-06 00:55:46.610673 | orchestrator | 2025-09-06 00:55:46 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:46.610774 | orchestrator | 2025-09-06 00:55:46 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:49.658464 | orchestrator | 2025-09-06 00:55:49 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:49.659869 | orchestrator | 2025-09-06 00:55:49 | INFO  | Task e17840e1-6cf0-4bac-a499-688ea8973030 is in state STARTED 2025-09-06 00:55:49.661431 | orchestrator | 2025-09-06 00:55:49 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:49.661631 | orchestrator | 2025-09-06 00:55:49 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:52.717457 | orchestrator | 2025-09-06 00:55:52 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:52.719891 | orchestrator | 2025-09-06 00:55:52 | INFO  | Task e17840e1-6cf0-4bac-a499-688ea8973030 is in state STARTED 2025-09-06 00:55:52.721311 | orchestrator | 2025-09-06 00:55:52 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:52.721337 | orchestrator | 2025-09-06 00:55:52 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:55.766918 | orchestrator | 2025-09-06 00:55:55 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:55.767860 | orchestrator | 2025-09-06 00:55:55 | INFO  | Task e17840e1-6cf0-4bac-a499-688ea8973030 is in state STARTED 2025-09-06 00:55:55.769177 | orchestrator | 2025-09-06 00:55:55 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:55.769608 | orchestrator | 2025-09-06 00:55:55 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:55:58.827862 | orchestrator | 2025-09-06 00:55:58 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:55:58.828206 | orchestrator | 2025-09-06 00:55:58 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:55:58.829720 | orchestrator | 2025-09-06 00:55:58 | INFO  | Task e17840e1-6cf0-4bac-a499-688ea8973030 is in state SUCCESS 2025-09-06 00:55:58.831178 | orchestrator | 2025-09-06 00:55:58 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:55:58.831434 | orchestrator | 2025-09-06 00:55:58 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:01.884280 | orchestrator | 2025-09-06 00:56:01 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:01.886196 | orchestrator | 2025-09-06 00:56:01 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:56:01.888825 | 
orchestrator | 2025-09-06 00:56:01 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:01.889379 | orchestrator | 2025-09-06 00:56:01 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:04.929235 | orchestrator | 2025-09-06 00:56:04 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:04.929755 | orchestrator | 2025-09-06 00:56:04 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:56:04.931152 | orchestrator | 2025-09-06 00:56:04 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:04.931203 | orchestrator | 2025-09-06 00:56:04 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:07.973226 | orchestrator | 2025-09-06 00:56:07 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:07.975063 | orchestrator | 2025-09-06 00:56:07 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:56:07.976183 | orchestrator | 2025-09-06 00:56:07 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:07.976227 | orchestrator | 2025-09-06 00:56:07 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:11.020534 | orchestrator | 2025-09-06 00:56:11 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:11.022718 | orchestrator | 2025-09-06 00:56:11 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state STARTED 2025-09-06 00:56:11.024175 | orchestrator | 2025-09-06 00:56:11 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:11.024204 | orchestrator | 2025-09-06 00:56:11 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:14.076199 | orchestrator | 2025-09-06 00:56:14 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:14.078907 | orchestrator | 2025-09-06 00:56:14 | INFO  | Task e260cd00-6d33-4b70-9abf-53dc341d3bfc is in state SUCCESS 2025-09-06 00:56:14.081915 | orchestrator | 2025-09-06 00:56:14.081961 | orchestrator | 2025-09-06 00:56:14.081974 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-06 00:56:14.081987 | orchestrator | 2025-09-06 00:56:14.081998 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-06 00:56:14.082011 | orchestrator | Saturday 06 September 2025 00:55:31 +0000 (0:00:00.158) 0:00:00.158 **** 2025-09-06 00:56:14.082077 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-06 00:56:14.082091 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-06 00:56:14.082103 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-06 00:56:14.082115 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-06 00:56:14.082126 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-06 00:56:14.082137 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-06 00:56:14.082148 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-06 00:56:14.082183 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-06 00:56:14.082195 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-06 00:56:14.082206 | orchestrator | 2025-09-06 00:56:14.082217 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-06 00:56:14.082228 | orchestrator | Saturday 06 September 2025 00:55:35 +0000 (0:00:04.183) 0:00:04.341 **** 2025-09-06 00:56:14.082240 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-06 00:56:14.082251 | orchestrator | 2025-09-06 00:56:14.082262 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-06 00:56:14.082273 | orchestrator | Saturday 06 September 2025 00:55:36 +0000 (0:00:00.931) 0:00:05.273 **** 2025-09-06 00:56:14.082284 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-06 00:56:14.082295 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-06 00:56:14.082306 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-06 00:56:14.082317 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-06 00:56:14.082327 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-06 00:56:14.082338 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-06 00:56:14.082349 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-06 00:56:14.082360 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-06 00:56:14.082370 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-06 00:56:14.082381 | orchestrator | 2025-09-06 00:56:14.082392 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-06 00:56:14.082403 | orchestrator | Saturday 06 September 2025 00:55:49 +0000 (0:00:12.687) 0:00:17.961 **** 2025-09-06 00:56:14.082414 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-06 00:56:14.082425 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-06 00:56:14.082436 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-06 00:56:14.082447 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-06 00:56:14.082458 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-06 00:56:14.082468 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-06 00:56:14.082504 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-06 00:56:14.082518 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-06 00:56:14.082531 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-06 00:56:14.082544 | orchestrator | 2025-09-06 00:56:14.082557 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:56:14.082579 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 
00:56:14.082593 | orchestrator | 2025-09-06 00:56:14.082605 | orchestrator | 2025-09-06 00:56:14.082618 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:56:14.082631 | orchestrator | Saturday 06 September 2025 00:55:56 +0000 (0:00:06.684) 0:00:24.645 **** 2025-09-06 00:56:14.082644 | orchestrator | =============================================================================== 2025-09-06 00:56:14.082657 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.69s 2025-09-06 00:56:14.082669 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.68s 2025-09-06 00:56:14.082682 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.18s 2025-09-06 00:56:14.082704 | orchestrator | Create share directory -------------------------------------------------- 0.93s 2025-09-06 00:56:14.082717 | orchestrator | 2025-09-06 00:56:14.082730 | orchestrator | 2025-09-06 00:56:14.082743 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:56:14.082755 | orchestrator | 2025-09-06 00:56:14.082782 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:56:14.082796 | orchestrator | Saturday 06 September 2025 00:54:25 +0000 (0:00:00.195) 0:00:00.195 **** 2025-09-06 00:56:14.082808 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.082822 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.082835 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.082848 | orchestrator | 2025-09-06 00:56:14.082859 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:56:14.082870 | orchestrator | Saturday 06 September 2025 00:54:26 +0000 (0:00:00.231) 0:00:00.426 **** 2025-09-06 00:56:14.082881 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-06 00:56:14.082893 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-06 00:56:14.082904 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-06 00:56:14.082915 | orchestrator | 2025-09-06 00:56:14.082925 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-06 00:56:14.082936 | orchestrator | 2025-09-06 00:56:14.082947 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-06 00:56:14.082958 | orchestrator | Saturday 06 September 2025 00:54:26 +0000 (0:00:00.339) 0:00:00.765 **** 2025-09-06 00:56:14.082969 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:56:14.082980 | orchestrator | 2025-09-06 00:56:14.082991 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-06 00:56:14.083002 | orchestrator | Saturday 06 September 2025 00:54:26 +0000 (0:00:00.448) 0:00:01.213 **** 2025-09-06 00:56:14.083018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:56:14.083058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:56:14.083078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:56:14.083098 | orchestrator | 2025-09-06 00:56:14.083109 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-06 00:56:14.083120 | orchestrator | Saturday 06 September 2025 00:54:27 +0000 (0:00:00.917) 0:00:02.131 **** 2025-09-06 00:56:14.083131 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.083142 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.083153 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.083164 | orchestrator | 2025-09-06 00:56:14.083175 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-06 00:56:14.083186 | orchestrator | Saturday 06 September 2025 00:54:28 +0000 (0:00:00.342) 0:00:02.474 **** 2025-09-06 00:56:14.083197 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-06 00:56:14.083208 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 
'no'})  2025-09-06 00:56:14.083224 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-06 00:56:14.083236 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-06 00:56:14.083247 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-06 00:56:14.083258 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-06 00:56:14.083269 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-06 00:56:14.083279 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-06 00:56:14.083290 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-06 00:56:14.083301 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-06 00:56:14.083312 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-06 00:56:14.083322 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-06 00:56:14.083333 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-06 00:56:14.083344 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-06 00:56:14.083355 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-06 00:56:14.083366 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-06 00:56:14.083376 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-06 00:56:14.083387 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-06 00:56:14.083398 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-06 00:56:14.083408 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-06 00:56:14.083419 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-06 00:56:14.083430 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-06 00:56:14.083441 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-06 00:56:14.083451 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-06 00:56:14.083463 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-06 00:56:14.083520 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-06 00:56:14.083534 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-06 00:56:14.083545 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-06 00:56:14.083556 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-06 00:56:14.083567 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-06 00:56:14.083578 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-06 00:56:14.083589 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-06 00:56:14.083599 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-06 00:56:14.083615 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-06 00:56:14.083627 | orchestrator | 2025-09-06 00:56:14.083638 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-06 00:56:14.083649 | orchestrator | Saturday 06 September 2025 00:54:28 +0000 (0:00:00.654) 0:00:03.129 **** 2025-09-06 00:56:14.083660 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.083671 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.083682 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.083692 | orchestrator | 2025-09-06 00:56:14.083703 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-06 00:56:14.083714 | orchestrator | Saturday 06 September 2025 00:54:28 +0000 (0:00:00.284) 0:00:03.413 **** 2025-09-06 00:56:14.083725 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.083736 | orchestrator | 2025-09-06 00:56:14.083747 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-06 00:56:14.083763 | orchestrator | Saturday 06 September 2025 00:54:29 +0000 (0:00:00.111) 0:00:03.525 **** 2025-09-06 00:56:14.083775 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.083786 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.083797 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.083807 | orchestrator | 2025-09-06 00:56:14.083818 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-06 00:56:14.083829 | orchestrator | Saturday 06 September 2025 00:54:29 +0000 (0:00:00.352) 0:00:03.877 **** 2025-09-06 00:56:14.083840 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.083851 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.083862 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.083873 | orchestrator | 2025-09-06 00:56:14.083883 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-06 00:56:14.083894 | orchestrator | Saturday 06 September 2025 00:54:29 +0000 (0:00:00.253) 0:00:04.131 **** 2025-09-06 00:56:14.083905 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.083916 | orchestrator | 2025-09-06 00:56:14.083927 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-06 00:56:14.083938 | orchestrator | Saturday 06 September 2025 00:54:29 +0000 (0:00:00.119) 0:00:04.250 **** 2025-09-06 00:56:14.083949 | 
orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.083959 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.083971 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.083989 | orchestrator | 2025-09-06 00:56:14.084000 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-06 00:56:14.084011 | orchestrator | Saturday 06 September 2025 00:54:30 +0000 (0:00:00.251) 0:00:04.502 **** 2025-09-06 00:56:14.084022 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.084033 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.084044 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.084054 | orchestrator | 2025-09-06 00:56:14.084065 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-06 00:56:14.084076 | orchestrator | Saturday 06 September 2025 00:54:30 +0000 (0:00:00.262) 0:00:04.764 **** 2025-09-06 00:56:14.084087 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.084098 | orchestrator | 2025-09-06 00:56:14.084109 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-06 00:56:14.084120 | orchestrator | Saturday 06 September 2025 00:54:30 +0000 (0:00:00.106) 0:00:04.870 **** 2025-09-06 00:56:14.084131 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.084142 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.084153 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.084164 | orchestrator | 2025-09-06 00:56:14.084174 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-06 00:56:14.084186 | orchestrator | Saturday 06 September 2025 00:54:30 +0000 (0:00:00.373) 0:00:05.244 **** 2025-09-06 00:56:14.084197 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.084208 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.084218 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.084229 | orchestrator | 2025-09-06 00:56:14.084240 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-06 00:56:14.084251 | orchestrator | Saturday 06 September 2025 00:54:31 +0000 (0:00:00.252) 0:00:05.496 **** 2025-09-06 00:56:14.084262 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.084273 | orchestrator | 2025-09-06 00:56:14.084284 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-06 00:56:14.084295 | orchestrator | Saturday 06 September 2025 00:54:31 +0000 (0:00:00.109) 0:00:05.606 **** 2025-09-06 00:56:14.084305 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.084316 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.084327 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.084338 | orchestrator | 2025-09-06 00:56:14.084348 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-06 00:56:14.084359 | orchestrator | Saturday 06 September 2025 00:54:31 +0000 (0:00:00.273) 0:00:05.880 **** 2025-09-06 00:56:14.084370 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.084381 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.084392 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.084403 | orchestrator | 2025-09-06 00:56:14.084413 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-06 00:56:14.084424 | orchestrator | 
Saturday 06 September 2025 00:54:31 +0000 (0:00:00.290) 0:00:06.170 **** 2025-09-06 00:56:14.084435 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.084446 | orchestrator | 2025-09-06 00:56:14.084457 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-06 00:56:14.084468 | orchestrator | Saturday 06 September 2025 00:54:32 +0000 (0:00:00.250) 0:00:06.420 **** 2025-09-06 00:56:14.084526 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.084539 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.084550 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.084560 | orchestrator | 2025-09-06 00:56:14.084571 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-06 00:56:14.084582 | orchestrator | Saturday 06 September 2025 00:54:32 +0000 (0:00:00.303) 0:00:06.723 **** 2025-09-06 00:56:14.084593 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.084604 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.084615 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.084625 | orchestrator | 2025-09-06 00:56:14.084650 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-06 00:56:14.084662 | orchestrator | Saturday 06 September 2025 00:54:32 +0000 (0:00:00.331) 0:00:07.055 **** 2025-09-06 00:56:14.084672 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.084683 | orchestrator | 2025-09-06 00:56:14.084694 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-06 00:56:14.084705 | orchestrator | Saturday 06 September 2025 00:54:32 +0000 (0:00:00.137) 0:00:07.192 **** 2025-09-06 00:56:14.084715 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.084725 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.084734 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.084744 | orchestrator | 2025-09-06 00:56:14.084753 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-06 00:56:14.084763 | orchestrator | Saturday 06 September 2025 00:54:33 +0000 (0:00:00.319) 0:00:07.511 **** 2025-09-06 00:56:14.084773 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.084783 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.084792 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.084802 | orchestrator | 2025-09-06 00:56:14.084817 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-06 00:56:14.084827 | orchestrator | Saturday 06 September 2025 00:54:33 +0000 (0:00:00.574) 0:00:08.086 **** 2025-09-06 00:56:14.084837 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.084847 | orchestrator | 2025-09-06 00:56:14.084856 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-06 00:56:14.084866 | orchestrator | Saturday 06 September 2025 00:54:33 +0000 (0:00:00.134) 0:00:08.220 **** 2025-09-06 00:56:14.084876 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.084885 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.084895 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.084904 | orchestrator | 2025-09-06 00:56:14.084914 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-06 00:56:14.084923 | orchestrator | Saturday 06 September 
2025 00:54:34 +0000 (0:00:00.314) 0:00:08.534 **** 2025-09-06 00:56:14.084933 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.084943 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.084952 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.084962 | orchestrator | 2025-09-06 00:56:14.084971 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-06 00:56:14.084981 | orchestrator | Saturday 06 September 2025 00:54:34 +0000 (0:00:00.312) 0:00:08.847 **** 2025-09-06 00:56:14.084991 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.085000 | orchestrator | 2025-09-06 00:56:14.085010 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-06 00:56:14.085020 | orchestrator | Saturday 06 September 2025 00:54:34 +0000 (0:00:00.145) 0:00:08.992 **** 2025-09-06 00:56:14.085029 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.085039 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.085048 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.085058 | orchestrator | 2025-09-06 00:56:14.085067 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-06 00:56:14.085077 | orchestrator | Saturday 06 September 2025 00:54:34 +0000 (0:00:00.292) 0:00:09.285 **** 2025-09-06 00:56:14.085087 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.085096 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.085106 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.085116 | orchestrator | 2025-09-06 00:56:14.085125 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-06 00:56:14.085135 | orchestrator | Saturday 06 September 2025 00:54:35 +0000 (0:00:00.562) 0:00:09.847 **** 2025-09-06 00:56:14.085145 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.085154 | orchestrator | 2025-09-06 00:56:14.085164 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-06 00:56:14.085174 | orchestrator | Saturday 06 September 2025 00:54:35 +0000 (0:00:00.154) 0:00:10.002 **** 2025-09-06 00:56:14.085195 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.085205 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.085214 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.085224 | orchestrator | 2025-09-06 00:56:14.085233 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-06 00:56:14.085243 | orchestrator | Saturday 06 September 2025 00:54:35 +0000 (0:00:00.290) 0:00:10.292 **** 2025-09-06 00:56:14.085252 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:56:14.085262 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:56:14.085272 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:56:14.085281 | orchestrator | 2025-09-06 00:56:14.085291 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-06 00:56:14.085300 | orchestrator | Saturday 06 September 2025 00:54:36 +0000 (0:00:00.361) 0:00:10.654 **** 2025-09-06 00:56:14.085310 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.085319 | orchestrator | 2025-09-06 00:56:14.085329 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-06 00:56:14.085338 | orchestrator | Saturday 06 September 2025 00:54:36 +0000 (0:00:00.131) 
0:00:10.786 **** 2025-09-06 00:56:14.085348 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.085358 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.085367 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.085377 | orchestrator | 2025-09-06 00:56:14.085386 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-06 00:56:14.085396 | orchestrator | Saturday 06 September 2025 00:54:36 +0000 (0:00:00.520) 0:00:11.306 **** 2025-09-06 00:56:14.085406 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:56:14.085415 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:56:14.085425 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:56:14.085434 | orchestrator | 2025-09-06 00:56:14.085444 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-06 00:56:14.085453 | orchestrator | Saturday 06 September 2025 00:54:38 +0000 (0:00:01.732) 0:00:13.039 **** 2025-09-06 00:56:14.085463 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-06 00:56:14.085473 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-06 00:56:14.085505 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-06 00:56:14.085516 | orchestrator | 2025-09-06 00:56:14.085525 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-06 00:56:14.085535 | orchestrator | Saturday 06 September 2025 00:54:40 +0000 (0:00:01.840) 0:00:14.880 **** 2025-09-06 00:56:14.085545 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-06 00:56:14.085555 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-06 00:56:14.085564 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-06 00:56:14.085574 | orchestrator | 2025-09-06 00:56:14.085584 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-06 00:56:14.085593 | orchestrator | Saturday 06 September 2025 00:54:42 +0000 (0:00:02.240) 0:00:17.121 **** 2025-09-06 00:56:14.085608 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-06 00:56:14.085618 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-06 00:56:14.085628 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-06 00:56:14.085638 | orchestrator | 2025-09-06 00:56:14.085647 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-06 00:56:14.085657 | orchestrator | Saturday 06 September 2025 00:54:44 +0000 (0:00:02.282) 0:00:19.403 **** 2025-09-06 00:56:14.085673 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.085683 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.085693 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.085702 | orchestrator | 2025-09-06 00:56:14.085712 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-06 00:56:14.085722 | orchestrator | Saturday 06 September 2025 
00:54:45 +0000 (0:00:00.321) 0:00:19.725 **** 2025-09-06 00:56:14.085731 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.085741 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.085751 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.085760 | orchestrator | 2025-09-06 00:56:14.085770 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-06 00:56:14.085780 | orchestrator | Saturday 06 September 2025 00:54:45 +0000 (0:00:00.294) 0:00:20.019 **** 2025-09-06 00:56:14.085789 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:56:14.085799 | orchestrator | 2025-09-06 00:56:14.085809 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-06 00:56:14.085818 | orchestrator | Saturday 06 September 2025 00:54:46 +0000 (0:00:00.563) 0:00:20.583 **** 2025-09-06 00:56:14.085834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:56:14.085854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:56:14.085879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:56:14.085890 | orchestrator | 2025-09-06 00:56:14.085900 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-06 00:56:14.085910 | orchestrator | Saturday 06 September 2025 00:54:47 +0000 (0:00:01.765) 0:00:22.348 **** 2025-09-06 00:56:14.085928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-06 00:56:14.085946 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.085962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-06 00:56:14.085984 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.085995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-06 00:56:14.086006 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.086038 | orchestrator | 2025-09-06 00:56:14.086051 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-06 00:56:14.086061 | orchestrator | Saturday 06 September 2025 00:54:48 +0000 (0:00:00.628) 0:00:22.976 **** 2025-09-06 00:56:14.086084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-06 00:56:14.086102 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.086113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-06 00:56:14.086124 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.086147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-06 00:56:14.086165 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.086174 | orchestrator | 2025-09-06 00:56:14.086184 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-06 00:56:14.086194 | orchestrator | Saturday 06 September 2025 00:54:49 +0000 (0:00:00.863) 0:00:23.840 **** 2025-09-06 00:56:14.086205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:56:14.086228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:56:14.086246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-06 00:56:14.086266 | orchestrator | 2025-09-06 00:56:14.086280 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-06 00:56:14.086290 | orchestrator | Saturday 06 September 2025 00:54:51 +0000 (0:00:01.671) 0:00:25.512 **** 2025-09-06 00:56:14.086300 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:56:14.086310 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:56:14.086320 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:56:14.086329 | orchestrator | 2025-09-06 00:56:14.086339 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-06 00:56:14.086349 | orchestrator | Saturday 06 September 2025 00:54:51 +0000 (0:00:00.304) 0:00:25.817 **** 2025-09-06 00:56:14.086359 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:56:14.086368 | orchestrator | 2025-09-06 00:56:14.086378 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-06 00:56:14.086387 | orchestrator | Saturday 06 September 2025 00:54:51 +0000 (0:00:00.529) 0:00:26.346 **** 2025-09-06 00:56:14.086397 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:56:14.086407 | orchestrator | 2025-09-06 00:56:14.086421 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-06 00:56:14.086432 | orchestrator | Saturday 06 September 2025 00:54:54 +0000 (0:00:02.263) 0:00:28.609 **** 2025-09-06 00:56:14.086441 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:56:14.086451 | orchestrator | 2025-09-06 00:56:14.086460 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-06 00:56:14.086470 | orchestrator | Saturday 06 September 2025 00:54:56 +0000 (0:00:02.755) 0:00:31.364 **** 2025-09-06 00:56:14.086493 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:56:14.086504 | orchestrator | 2025-09-06 00:56:14.086514 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-06 00:56:14.086523 | orchestrator | Saturday 06 September 2025 00:55:12 +0000 (0:00:15.614) 0:00:46.978 **** 2025-09-06 00:56:14.086533 | orchestrator | 2025-09-06 00:56:14.086543 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-06 00:56:14.086552 | orchestrator | Saturday 06 September 2025 00:55:12 +0000 (0:00:00.061) 0:00:47.040 **** 2025-09-06 00:56:14.086562 | orchestrator | 2025-09-06 00:56:14.086572 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-06 00:56:14.086581 | orchestrator | Saturday 06 September 2025 00:55:12 +0000 (0:00:00.056) 0:00:47.096 **** 2025-09-06 00:56:14.086591 | orchestrator | 2025-09-06 00:56:14.086601 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-06 00:56:14.086610 | orchestrator | Saturday 06 September 2025 00:55:12 +0000 (0:00:00.061) 0:00:47.158 **** 2025-09-06 00:56:14.086620 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:56:14.086630 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:56:14.086639 | orchestrator | changed: [testbed-node-2] 
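For reference, every horizon task above loops over the same per-service item, printed once per host. The following minimal Python sketch restates the testbed-node-0 item from the log in a readable layout, trimmed to the healthcheck and the two HTTP HAProxy entries; the summarize() helper is illustrative only and is not part of kolla-ansible or of this job output.

# Illustrative sketch; values copied from the testbed-node-0 loop item above.
horizon_service = {
    "container_name": "horizon",
    "group": "horizon",
    "enabled": True,
    "image": "registry.osism.tech/kolla/horizon:2024.2",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
        "timeout": "30",
    },
    "haproxy": {
        "horizon": {"enabled": True, "mode": "http", "external": False,
                    "port": "443", "listen_port": "80", "tls_backend": "no"},
        "horizon_external": {"enabled": True, "mode": "http", "external": True,
                             "external_fqdn": "api.testbed.osism.xyz",
                             "port": "443", "listen_port": "80", "tls_backend": "no"},
    },
}

def summarize(service):
    # Print each HAProxy endpoint the service item declares.
    for name, lb in service.get("haproxy", {}).items():
        scope = "external" if lb.get("external") else "internal"
        print(f"{name}: {scope}, frontend port {lb['port']} -> backend port {lb['listen_port']}")

summarize(horizon_service)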
2025-09-06 00:56:14.086649 | orchestrator | 2025-09-06 00:56:14.086659 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:56:14.086669 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-06 00:56:14.086678 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-06 00:56:14.086688 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-06 00:56:14.086698 | orchestrator | 2025-09-06 00:56:14.086708 | orchestrator | 2025-09-06 00:56:14.086718 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:56:14.086727 | orchestrator | Saturday 06 September 2025 00:56:12 +0000 (0:00:59.359) 0:01:46.517 **** 2025-09-06 00:56:14.086737 | orchestrator | =============================================================================== 2025-09-06 00:56:14.086752 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.36s 2025-09-06 00:56:14.086762 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.61s 2025-09-06 00:56:14.086772 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.76s 2025-09-06 00:56:14.086781 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.28s 2025-09-06 00:56:14.086791 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.26s 2025-09-06 00:56:14.086800 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.24s 2025-09-06 00:56:14.086810 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.84s 2025-09-06 00:56:14.086819 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.77s 2025-09-06 00:56:14.086829 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.73s 2025-09-06 00:56:14.086838 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.67s 2025-09-06 00:56:14.086848 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.92s 2025-09-06 00:56:14.086857 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.86s 2025-09-06 00:56:14.086867 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s 2025-09-06 00:56:14.086877 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.63s 2025-09-06 00:56:14.086886 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2025-09-06 00:56:14.086896 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2025-09-06 00:56:14.086905 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2025-09-06 00:56:14.086919 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2025-09-06 00:56:14.086929 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2025-09-06 00:56:14.086939 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.45s 2025-09-06 00:56:14.086948 | orchestrator 
| 2025-09-06 00:56:14 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:14.086958 | orchestrator | 2025-09-06 00:56:14 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:17.127263 | orchestrator | 2025-09-06 00:56:17 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:17.128788 | orchestrator | 2025-09-06 00:56:17 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:17.128931 | orchestrator | 2025-09-06 00:56:17 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:20.165999 | orchestrator | 2025-09-06 00:56:20 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:20.167757 | orchestrator | 2025-09-06 00:56:20 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:20.168083 | orchestrator | 2025-09-06 00:56:20 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:23.211918 | orchestrator | 2025-09-06 00:56:23 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:23.213368 | orchestrator | 2025-09-06 00:56:23 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:23.213878 | orchestrator | 2025-09-06 00:56:23 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:26.254564 | orchestrator | 2025-09-06 00:56:26 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:26.256066 | orchestrator | 2025-09-06 00:56:26 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:26.256104 | orchestrator | 2025-09-06 00:56:26 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:29.300765 | orchestrator | 2025-09-06 00:56:29 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:29.302188 | orchestrator | 2025-09-06 00:56:29 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:29.302313 | orchestrator | 2025-09-06 00:56:29 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:32.349748 | orchestrator | 2025-09-06 00:56:32 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:32.351289 | orchestrator | 2025-09-06 00:56:32 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:32.351331 | orchestrator | 2025-09-06 00:56:32 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:35.393828 | orchestrator | 2025-09-06 00:56:35 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:35.395306 | orchestrator | 2025-09-06 00:56:35 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:35.395831 | orchestrator | 2025-09-06 00:56:35 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:38.439240 | orchestrator | 2025-09-06 00:56:38 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:38.440887 | orchestrator | 2025-09-06 00:56:38 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:38.440918 | orchestrator | 2025-09-06 00:56:38 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:41.483110 | orchestrator | 2025-09-06 00:56:41 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:41.483877 | orchestrator | 2025-09-06 00:56:41 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 
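The manager lines above and below follow a simple pattern: poll each queued task's state, wait one second, and repeat until every task has left STARTED. A minimal Python sketch of that polling loop, assuming a hypothetical get_state() callable standing in for the real status lookup (it is not the osism client API):

import time

def wait_for_tasks(task_ids, get_state, interval=1):
    # Poll each task's state once per interval until none is left in STARTED.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)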
2025-09-06 00:56:41.483918 | orchestrator | 2025-09-06 00:56:41 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:44.531520 | orchestrator | 2025-09-06 00:56:44 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:44.532559 | orchestrator | 2025-09-06 00:56:44 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:44.532599 | orchestrator | 2025-09-06 00:56:44 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:47.576775 | orchestrator | 2025-09-06 00:56:47 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:47.577264 | orchestrator | 2025-09-06 00:56:47 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:47.577295 | orchestrator | 2025-09-06 00:56:47 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:50.621750 | orchestrator | 2025-09-06 00:56:50 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state STARTED 2025-09-06 00:56:50.623611 | orchestrator | 2025-09-06 00:56:50 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:50.624061 | orchestrator | 2025-09-06 00:56:50 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:53.680843 | orchestrator | 2025-09-06 00:56:53 | INFO  | Task ec539655-77f8-496e-9d3a-eebdc1d79342 is in state SUCCESS 2025-09-06 00:56:53.681837 | orchestrator | 2025-09-06 00:56:53 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:56:53.683251 | orchestrator | 2025-09-06 00:56:53 | INFO  | Task ab888355-ab74-4103-90e1-8332feb38b61 is in state STARTED 2025-09-06 00:56:53.684609 | orchestrator | 2025-09-06 00:56:53 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:53.685838 | orchestrator | 2025-09-06 00:56:53 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:56:53.686105 | orchestrator | 2025-09-06 00:56:53 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:56.729354 | orchestrator | 2025-09-06 00:56:56 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:56:56.729574 | orchestrator | 2025-09-06 00:56:56 | INFO  | Task ab888355-ab74-4103-90e1-8332feb38b61 is in state SUCCESS 2025-09-06 00:56:56.729607 | orchestrator | 2025-09-06 00:56:56 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:56.730404 | orchestrator | 2025-09-06 00:56:56 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:56:56.730444 | orchestrator | 2025-09-06 00:56:56 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:56:59.767119 | orchestrator | 2025-09-06 00:56:59 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:56:59.767197 | orchestrator | 2025-09-06 00:56:59 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:56:59.767668 | orchestrator | 2025-09-06 00:56:59 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state STARTED 2025-09-06 00:56:59.768168 | orchestrator | 2025-09-06 00:56:59 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:56:59.769601 | orchestrator | 2025-09-06 00:56:59 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:56:59.769628 | orchestrator | 2025-09-06 00:56:59 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:02.820171 | orchestrator | 
2025-09-06 00:57:02 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:02.820260 | orchestrator | 2025-09-06 00:57:02 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:02.820274 | orchestrator | 2025-09-06 00:57:02 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state STARTED 2025-09-06 00:57:02.820286 | orchestrator | 2025-09-06 00:57:02 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:57:02.820297 | orchestrator | 2025-09-06 00:57:02 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:02.820308 | orchestrator | 2025-09-06 00:57:02 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:05.829808 | orchestrator | 2025-09-06 00:57:05 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:05.830336 | orchestrator | 2025-09-06 00:57:05 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:05.831370 | orchestrator | 2025-09-06 00:57:05 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state STARTED 2025-09-06 00:57:05.833120 | orchestrator | 2025-09-06 00:57:05 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state STARTED 2025-09-06 00:57:05.835062 | orchestrator | 2025-09-06 00:57:05 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:05.835156 | orchestrator | 2025-09-06 00:57:05 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:08.878983 | orchestrator | 2025-09-06 00:57:08 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:08.902687 | orchestrator | 2025-09-06 00:57:08 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:08.903103 | orchestrator | 2025-09-06 00:57:08 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state STARTED 2025-09-06 00:57:08.905661 | orchestrator | 2025-09-06 00:57:08 | INFO  | Task 4e2cffcd-d657-49ba-943d-9ccd011131b9 is in state SUCCESS 2025-09-06 00:57:08.906876 | orchestrator | 2025-09-06 00:57:08.906910 | orchestrator | 2025-09-06 00:57:08.907088 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-06 00:57:08.907527 | orchestrator | 2025-09-06 00:57:08.907541 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-06 00:57:08.907552 | orchestrator | Saturday 06 September 2025 00:56:00 +0000 (0:00:00.235) 0:00:00.235 **** 2025-09-06 00:57:08.907565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-06 00:57:08.907577 | orchestrator | 2025-09-06 00:57:08.907588 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-06 00:57:08.907599 | orchestrator | Saturday 06 September 2025 00:56:00 +0000 (0:00:00.215) 0:00:00.450 **** 2025-09-06 00:57:08.907610 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-06 00:57:08.907622 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-06 00:57:08.907633 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-06 00:57:08.907645 | orchestrator | 2025-09-06 00:57:08.907656 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-06 00:57:08.907666 | 
orchestrator | Saturday 06 September 2025 00:56:01 +0000 (0:00:01.225) 0:00:01.675 **** 2025-09-06 00:57:08.907678 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-06 00:57:08.907688 | orchestrator | 2025-09-06 00:57:08.907699 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-06 00:57:08.907711 | orchestrator | Saturday 06 September 2025 00:56:03 +0000 (0:00:01.093) 0:00:02.768 **** 2025-09-06 00:57:08.907722 | orchestrator | changed: [testbed-manager] 2025-09-06 00:57:08.907733 | orchestrator | 2025-09-06 00:57:08.907744 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-06 00:57:08.907755 | orchestrator | Saturday 06 September 2025 00:56:04 +0000 (0:00:00.940) 0:00:03.709 **** 2025-09-06 00:57:08.907766 | orchestrator | changed: [testbed-manager] 2025-09-06 00:57:08.907777 | orchestrator | 2025-09-06 00:57:08.907788 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-06 00:57:08.907799 | orchestrator | Saturday 06 September 2025 00:56:04 +0000 (0:00:00.891) 0:00:04.600 **** 2025-09-06 00:57:08.907809 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-06 00:57:08.907820 | orchestrator | ok: [testbed-manager] 2025-09-06 00:57:08.907831 | orchestrator | 2025-09-06 00:57:08.907842 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-06 00:57:08.907853 | orchestrator | Saturday 06 September 2025 00:56:41 +0000 (0:00:36.544) 0:00:41.144 **** 2025-09-06 00:57:08.907864 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-06 00:57:08.907875 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-06 00:57:08.907886 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-06 00:57:08.907897 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-06 00:57:08.907908 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-06 00:57:08.907920 | orchestrator | 2025-09-06 00:57:08.907939 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-06 00:57:08.907956 | orchestrator | Saturday 06 September 2025 00:56:45 +0000 (0:00:04.054) 0:00:45.199 **** 2025-09-06 00:57:08.907972 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-06 00:57:08.907990 | orchestrator | 2025-09-06 00:57:08.908009 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-06 00:57:08.908026 | orchestrator | Saturday 06 September 2025 00:56:45 +0000 (0:00:00.442) 0:00:45.642 **** 2025-09-06 00:57:08.908046 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:57:08.908065 | orchestrator | 2025-09-06 00:57:08.908084 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-06 00:57:08.908115 | orchestrator | Saturday 06 September 2025 00:56:46 +0000 (0:00:00.129) 0:00:45.772 **** 2025-09-06 00:57:08.908128 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:57:08.908142 | orchestrator | 2025-09-06 00:57:08.908156 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-06 00:57:08.908169 | orchestrator | Saturday 06 September 2025 00:56:46 +0000 (0:00:00.298) 0:00:46.070 **** 
2025-09-06 00:57:08.908182 | orchestrator | changed: [testbed-manager] 2025-09-06 00:57:08.908195 | orchestrator | 2025-09-06 00:57:08.908208 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-06 00:57:08.908221 | orchestrator | Saturday 06 September 2025 00:56:48 +0000 (0:00:01.956) 0:00:48.027 **** 2025-09-06 00:57:08.908234 | orchestrator | changed: [testbed-manager] 2025-09-06 00:57:08.908247 | orchestrator | 2025-09-06 00:57:08.908260 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-06 00:57:08.908273 | orchestrator | Saturday 06 September 2025 00:56:49 +0000 (0:00:00.730) 0:00:48.757 **** 2025-09-06 00:57:08.908285 | orchestrator | changed: [testbed-manager] 2025-09-06 00:57:08.908298 | orchestrator | 2025-09-06 00:57:08.908311 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-06 00:57:08.908324 | orchestrator | Saturday 06 September 2025 00:56:49 +0000 (0:00:00.653) 0:00:49.411 **** 2025-09-06 00:57:08.908337 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-06 00:57:08.908350 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-06 00:57:08.908363 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-06 00:57:08.908376 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-06 00:57:08.908389 | orchestrator | 2025-09-06 00:57:08.908401 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:57:08.908433 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-06 00:57:08.908448 | orchestrator | 2025-09-06 00:57:08.908461 | orchestrator | 2025-09-06 00:57:08.908528 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:57:08.908542 | orchestrator | Saturday 06 September 2025 00:56:51 +0000 (0:00:01.445) 0:00:50.856 **** 2025-09-06 00:57:08.908553 | orchestrator | =============================================================================== 2025-09-06 00:57:08.908564 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.54s 2025-09-06 00:57:08.908575 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.05s 2025-09-06 00:57:08.908586 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.96s 2025-09-06 00:57:08.908597 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.45s 2025-09-06 00:57:08.908608 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.23s 2025-09-06 00:57:08.908620 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.09s 2025-09-06 00:57:08.908630 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.94s 2025-09-06 00:57:08.908641 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.89s 2025-09-06 00:57:08.908652 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.73s 2025-09-06 00:57:08.908663 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s 2025-09-06 00:57:08.908675 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.44s 2025-09-06 00:57:08.908686 
| orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2025-09-06 00:57:08.908696 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2025-09-06 00:57:08.908707 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-09-06 00:57:08.908718 | orchestrator | 2025-09-06 00:57:08.908729 | orchestrator | 2025-09-06 00:57:08.908741 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:57:08.908760 | orchestrator | 2025-09-06 00:57:08.908772 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:57:08.908783 | orchestrator | Saturday 06 September 2025 00:56:55 +0000 (0:00:00.157) 0:00:00.157 **** 2025-09-06 00:57:08.908794 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:57:08.908805 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:57:08.908816 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:57:08.908827 | orchestrator | 2025-09-06 00:57:08.908838 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:57:08.908849 | orchestrator | Saturday 06 September 2025 00:56:55 +0000 (0:00:00.233) 0:00:00.391 **** 2025-09-06 00:57:08.908860 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-06 00:57:08.908872 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-06 00:57:08.908883 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-06 00:57:08.908894 | orchestrator | 2025-09-06 00:57:08.908905 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-06 00:57:08.908916 | orchestrator | 2025-09-06 00:57:08.908927 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-06 00:57:08.908938 | orchestrator | Saturday 06 September 2025 00:56:55 +0000 (0:00:00.534) 0:00:00.925 **** 2025-09-06 00:57:08.908949 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:57:08.908960 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:57:08.908971 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:57:08.908982 | orchestrator | 2025-09-06 00:57:08.908993 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:57:08.909005 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:57:08.909016 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:57:08.909028 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:57:08.909039 | orchestrator | 2025-09-06 00:57:08.909050 | orchestrator | 2025-09-06 00:57:08.909061 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:57:08.909072 | orchestrator | Saturday 06 September 2025 00:56:56 +0000 (0:00:00.575) 0:00:01.501 **** 2025-09-06 00:57:08.909083 | orchestrator | =============================================================================== 2025-09-06 00:57:08.909094 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.58s 2025-09-06 00:57:08.909105 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2025-09-06 00:57:08.909116 | 
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.23s 2025-09-06 00:57:08.909127 | orchestrator | 2025-09-06 00:57:08.909138 | orchestrator | 2025-09-06 00:57:08.909149 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:57:08.909160 | orchestrator | 2025-09-06 00:57:08.909171 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:57:08.909182 | orchestrator | Saturday 06 September 2025 00:54:25 +0000 (0:00:00.237) 0:00:00.237 **** 2025-09-06 00:57:08.909193 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:57:08.909204 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:57:08.909216 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:57:08.909227 | orchestrator | 2025-09-06 00:57:08.909238 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:57:08.909249 | orchestrator | Saturday 06 September 2025 00:54:26 +0000 (0:00:00.257) 0:00:00.495 **** 2025-09-06 00:57:08.909260 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-06 00:57:08.909271 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-06 00:57:08.909283 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-06 00:57:08.909294 | orchestrator | 2025-09-06 00:57:08.909316 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-06 00:57:08.909327 | orchestrator | 2025-09-06 00:57:08.909369 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-06 00:57:08.909382 | orchestrator | Saturday 06 September 2025 00:54:26 +0000 (0:00:00.367) 0:00:00.863 **** 2025-09-06 00:57:08.909393 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:57:08.909405 | orchestrator | 2025-09-06 00:57:08.909470 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-06 00:57:08.909482 | orchestrator | Saturday 06 September 2025 00:54:26 +0000 (0:00:00.476) 0:00:01.339 **** 2025-09-06 00:57:08.909498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.909515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.909528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.909547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.909610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.909624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.909636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.909648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.909660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.909671 | orchestrator | 2025-09-06 00:57:08.909682 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-06 00:57:08.909693 | orchestrator | Saturday 06 September 2025 00:54:28 +0000 (0:00:01.739) 0:00:03.079 **** 2025-09-06 00:57:08.909704 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-06 00:57:08.909715 | orchestrator | 2025-09-06 00:57:08.909726 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-06 00:57:08.909744 | orchestrator | Saturday 06 September 2025 00:54:29 +0000 (0:00:00.724) 0:00:03.803 **** 2025-09-06 00:57:08.909755 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:57:08.909766 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:57:08.909777 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:57:08.909788 | orchestrator | 2025-09-06 00:57:08.909799 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-06 00:57:08.909810 | orchestrator | Saturday 06 September 2025 00:54:29 +0000 (0:00:00.382) 0:00:04.186 **** 2025-09-06 00:57:08.909821 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 00:57:08.909832 | orchestrator | 2025-09-06 00:57:08.909842 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2025-09-06 00:57:08.909853 | orchestrator | Saturday 06 September 2025 00:54:30 +0000 (0:00:00.605) 0:00:04.791 **** 2025-09-06 00:57:08.909867 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:57:08.909877 | orchestrator | 2025-09-06 00:57:08.909892 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-06 00:57:08.909902 | orchestrator | Saturday 06 September 2025 00:54:30 +0000 (0:00:00.474) 0:00:05.265 **** 2025-09-06 00:57:08.909913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.909924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.909936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.909953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.909975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.909987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.909997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910081 | orchestrator | 2025-09-06 00:57:08.910091 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-06 00:57:08.910101 | orchestrator | Saturday 06 September 2025 00:54:34 +0000 (0:00:03.156) 0:00:08.421 **** 2025-09-06 00:57:08.910112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-06 00:57:08.910134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.910145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-06 00:57:08.910155 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.910166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-06 00:57:08.910177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.910193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-06 00:57:08.910203 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:57:08.910224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-06 00:57:08.910236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.910246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-06 00:57:08.910256 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:57:08.910266 | orchestrator | 2025-09-06 00:57:08.910276 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-06 00:57:08.910285 | orchestrator | Saturday 06 September 2025 00:54:34 +0000 (0:00:00.875) 0:00:09.297 **** 2025-09-06 00:57:08.910296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-06 00:57:08.910312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.910322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})  2025-09-06 00:57:08.910332 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.910354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-06 00:57:08.910365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.910376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-06 00:57:08.910391 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:57:08.910401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2025-09-06 00:57:08.910428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.910449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-06 00:57:08.910460 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:57:08.910470 | orchestrator | 2025-09-06 00:57:08.910479 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-06 00:57:08.910489 | orchestrator | Saturday 06 September 2025 00:54:35 +0000 (0:00:00.786) 0:00:10.084 **** 2025-09-06 00:57:08.910500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.910511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.910528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.910548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910615 | orchestrator | 2025-09-06 00:57:08.910625 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-06 00:57:08.910635 | orchestrator | Saturday 06 September 2025 00:54:39 +0000 (0:00:03.283) 0:00:13.367 **** 2025-09-06 00:57:08.910655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.910666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.910677 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.910694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.910704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.910719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.910736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.910772 | orchestrator | 2025-09-06 00:57:08.910782 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-06 00:57:08.910792 | orchestrator | Saturday 06 September 2025 00:54:44 +0000 (0:00:05.459) 0:00:18.827 **** 2025-09-06 00:57:08.910802 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:57:08.910812 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:57:08.910821 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:57:08.910831 | orchestrator | 2025-09-06 00:57:08.910840 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-06 00:57:08.910850 | orchestrator | Saturday 06 September 2025 00:54:46 +0000 (0:00:01.555) 0:00:20.382 **** 2025-09-06 00:57:08.910859 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.910869 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:57:08.910879 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:57:08.910888 | orchestrator | 2025-09-06 00:57:08.910898 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-06 00:57:08.910907 | orchestrator | Saturday 06 September 2025 00:54:46 +0000 (0:00:00.520) 0:00:20.903 **** 2025-09-06 00:57:08.910917 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.910927 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:57:08.910936 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:57:08.910946 | orchestrator | 2025-09-06 00:57:08.910955 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-06 00:57:08.910965 | orchestrator | Saturday 06 September 2025 00:54:46 +0000 (0:00:00.328) 0:00:21.231 **** 2025-09-06 00:57:08.910974 | 
orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.910984 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:57:08.910993 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:57:08.911003 | orchestrator | 2025-09-06 00:57:08.911012 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-06 00:57:08.911022 | orchestrator | Saturday 06 September 2025 00:54:47 +0000 (0:00:00.536) 0:00:21.768 **** 2025-09-06 00:57:08.911039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.911055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.911072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.911084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.911094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.911105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-06 00:57:08.911126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.911142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.911152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.911162 | orchestrator | 2025-09-06 00:57:08.911171 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-06 00:57:08.911181 | orchestrator | Saturday 06 September 2025 00:54:49 +0000 (0:00:02.325) 0:00:24.093 **** 2025-09-06 00:57:08.911191 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.911201 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:57:08.911210 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:57:08.911220 | orchestrator | 2025-09-06 00:57:08.911229 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-06 00:57:08.911239 | orchestrator | Saturday 06 September 2025 00:54:50 +0000 (0:00:00.342) 0:00:24.435 **** 2025-09-06 00:57:08.911249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-06 00:57:08.911259 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-06 00:57:08.911268 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-06 00:57:08.911278 | orchestrator | 2025-09-06 00:57:08.911288 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-06 00:57:08.911297 | orchestrator | Saturday 06 September 2025 00:54:51 +0000 (0:00:01.547) 0:00:25.983 **** 2025-09-06 00:57:08.911307 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 00:57:08.911316 | orchestrator | 2025-09-06 00:57:08.911326 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-06 00:57:08.911336 | orchestrator | Saturday 06 September 2025 00:54:52 +0000 (0:00:00.857) 0:00:26.840 **** 2025-09-06 00:57:08.911345 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.911354 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:57:08.911364 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:57:08.911373 | orchestrator | 2025-09-06 00:57:08.911383 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-06 00:57:08.911393 | orchestrator | Saturday 06 September 2025 00:54:53 +0000 (0:00:00.820) 0:00:27.660 **** 2025-09-06 00:57:08.911402 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-06 00:57:08.911424 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-06 00:57:08.911435 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 00:57:08.911444 | orchestrator | 2025-09-06 00:57:08.911454 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-06 00:57:08.911463 | orchestrator | Saturday 06 September 2025 00:54:54 +0000 (0:00:01.008) 0:00:28.668 **** 2025-09-06 00:57:08.911478 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:57:08.911488 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:57:08.911497 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:57:08.911507 | orchestrator | 2025-09-06 00:57:08.911516 | orchestrator | TASK 
[keystone : Copying files for keystone-fernet] **************************** 2025-09-06 00:57:08.911526 | orchestrator | Saturday 06 September 2025 00:54:54 +0000 (0:00:00.316) 0:00:28.985 **** 2025-09-06 00:57:08.911535 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-06 00:57:08.911545 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-06 00:57:08.911554 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-06 00:57:08.911564 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-06 00:57:08.911574 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-06 00:57:08.911592 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-06 00:57:08.911603 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-06 00:57:08.911612 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-06 00:57:08.911622 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-06 00:57:08.911632 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-06 00:57:08.911641 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-06 00:57:08.911651 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-06 00:57:08.911660 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-06 00:57:08.911670 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-06 00:57:08.911680 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-06 00:57:08.911689 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-06 00:57:08.911699 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-06 00:57:08.911708 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-06 00:57:08.911718 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-06 00:57:08.911728 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-06 00:57:08.911737 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-06 00:57:08.911747 | orchestrator | 2025-09-06 00:57:08.911756 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-06 00:57:08.911766 | orchestrator | Saturday 06 September 2025 00:55:03 +0000 (0:00:09.137) 0:00:38.123 **** 2025-09-06 00:57:08.911775 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-06 00:57:08.911785 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-06 
00:57:08.911795 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-06 00:57:08.911804 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-06 00:57:08.911814 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-06 00:57:08.911823 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-06 00:57:08.911837 | orchestrator | 2025-09-06 00:57:08.911847 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-06 00:57:08.911857 | orchestrator | Saturday 06 September 2025 00:55:06 +0000 (0:00:03.009) 0:00:41.132 **** 2025-09-06 00:57:08.911867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.911889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.911901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-06 00:57:08.911912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.911927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.911937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-06 00:57:08.911947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.911966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}}) 2025-09-06 00:57:08.911977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-06 00:57:08.911987 | orchestrator | 2025-09-06 00:57:08.911997 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-06 00:57:08.912007 | orchestrator | Saturday 06 September 2025 00:55:09 +0000 (0:00:02.230) 0:00:43.362 **** 2025-09-06 00:57:08.912017 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.912026 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:57:08.912036 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:57:08.912046 | orchestrator | 2025-09-06 00:57:08.912055 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-06 00:57:08.912065 | orchestrator | Saturday 06 September 2025 00:55:09 +0000 (0:00:00.265) 0:00:43.628 **** 2025-09-06 00:57:08.912074 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:57:08.912084 | orchestrator | 2025-09-06 00:57:08.912093 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-06 00:57:08.912108 | orchestrator | Saturday 06 September 2025 00:55:11 +0000 (0:00:02.202) 0:00:45.830 **** 2025-09-06 00:57:08.912117 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:57:08.912127 | orchestrator | 2025-09-06 00:57:08.912137 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-06 00:57:08.912146 | orchestrator | Saturday 06 September 2025 00:55:13 +0000 (0:00:02.089) 0:00:47.920 **** 2025-09-06 00:57:08.912156 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:57:08.912165 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:57:08.912175 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:57:08.912184 | orchestrator | 2025-09-06 00:57:08.912194 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-06 00:57:08.912204 | orchestrator | Saturday 06 September 2025 00:55:14 +0000 (0:00:00.880) 0:00:48.801 **** 2025-09-06 00:57:08.912213 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:57:08.912223 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:57:08.912232 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:57:08.912242 | orchestrator | 2025-09-06 00:57:08.912251 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-06 00:57:08.912261 | orchestrator | Saturday 06 September 2025 00:55:14 +0000 (0:00:00.446) 0:00:49.248 **** 2025-09-06 00:57:08.912271 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.912280 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:57:08.912290 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:57:08.912300 | orchestrator | 2025-09-06 00:57:08.912309 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-06 00:57:08.912319 | orchestrator 
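The keystone container definitions checked above carry a kolla-style healthcheck: every 30 seconds the container probes the local Keystone API (http://192.168.16.10/.11/.12:5000) with a 30-second timeout and 3 retries before the container is marked unhealthy. The snippet below is a minimal, hypothetical stand-in for such a probe, not kolla's actual `healthcheck_curl` script; the URL and thresholds are copied from the logged container definition.

```python
# Minimal sketch of a kolla-style HTTP healthcheck using the parameters logged
# above (retries=3, timeout=30s, per-node probe URL). Illustrative only; this
# is not the real healthcheck_curl implementation shipped in the images.
import sys
import time
import urllib.error
import urllib.request

KEYSTONE_URL = "http://192.168.16.10:5000"  # node-local API address from the log
RETRIES = 3
TIMEOUT = 30  # seconds, matching the logged healthcheck timeout


def probe(url: str) -> bool:
    """Return True if the endpoint answered at all (any HTTP status)."""
    try:
        urllib.request.urlopen(url, timeout=TIMEOUT)
        return True
    except urllib.error.HTTPError:
        return True   # the server responded, even if with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused, timeout, DNS failure, ...


def main() -> int:
    for _ in range(RETRIES):
        if probe(KEYSTONE_URL):
            return 0  # healthy
        time.sleep(1)
    return 1  # unhealthy after all retries


if __name__ == "__main__":
    sys.exit(main())
```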
| Saturday 06 September 2025 00:55:15 +0000 (0:00:00.281) 0:00:49.530 **** 2025-09-06 00:57:08.912328 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:57:08.912338 | orchestrator | 2025-09-06 00:57:08.912347 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-06 00:57:08.912357 | orchestrator | Saturday 06 September 2025 00:55:28 +0000 (0:00:13.575) 0:01:03.106 **** 2025-09-06 00:57:08.912366 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:57:08.912376 | orchestrator | 2025-09-06 00:57:08.912385 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-06 00:57:08.912395 | orchestrator | Saturday 06 September 2025 00:55:38 +0000 (0:00:09.801) 0:01:12.907 **** 2025-09-06 00:57:08.912405 | orchestrator | 2025-09-06 00:57:08.912427 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-06 00:57:08.912437 | orchestrator | Saturday 06 September 2025 00:55:38 +0000 (0:00:00.077) 0:01:12.985 **** 2025-09-06 00:57:08.912446 | orchestrator | 2025-09-06 00:57:08.912456 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-06 00:57:08.912466 | orchestrator | Saturday 06 September 2025 00:55:38 +0000 (0:00:00.094) 0:01:13.079 **** 2025-09-06 00:57:08.912475 | orchestrator | 2025-09-06 00:57:08.912485 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-06 00:57:08.912494 | orchestrator | Saturday 06 September 2025 00:55:38 +0000 (0:00:00.071) 0:01:13.150 **** 2025-09-06 00:57:08.912504 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:57:08.912513 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:57:08.912523 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:57:08.912532 | orchestrator | 2025-09-06 00:57:08.912542 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-06 00:57:08.912552 | orchestrator | Saturday 06 September 2025 00:56:02 +0000 (0:00:23.689) 0:01:36.840 **** 2025-09-06 00:57:08.912561 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:57:08.912571 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:57:08.912580 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:57:08.912590 | orchestrator | 2025-09-06 00:57:08.912599 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-06 00:57:08.912609 | orchestrator | Saturday 06 September 2025 00:56:07 +0000 (0:00:04.977) 0:01:41.818 **** 2025-09-06 00:57:08.912623 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:57:08.912638 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:57:08.912652 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:57:08.912662 | orchestrator | 2025-09-06 00:57:08.912672 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-06 00:57:08.912682 | orchestrator | Saturday 06 September 2025 00:56:19 +0000 (0:00:11.899) 0:01:53.717 **** 2025-09-06 00:57:08.912691 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:57:08.912701 | orchestrator | 2025-09-06 00:57:08.912710 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-06 00:57:08.912720 | orchestrator | Saturday 06 September 2025 00:56:20 +0000 
(0:00:00.711) 0:01:54.429 **** 2025-09-06 00:57:08.912730 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:57:08.912739 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:57:08.912749 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:57:08.912758 | orchestrator | 2025-09-06 00:57:08.912768 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-06 00:57:08.912777 | orchestrator | Saturday 06 September 2025 00:56:20 +0000 (0:00:00.881) 0:01:55.311 **** 2025-09-06 00:57:08.912787 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:57:08.912797 | orchestrator | 2025-09-06 00:57:08.912806 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-06 00:57:08.912816 | orchestrator | Saturday 06 September 2025 00:56:22 +0000 (0:00:01.854) 0:01:57.165 **** 2025-09-06 00:57:08.912826 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-06 00:57:08.912835 | orchestrator | 2025-09-06 00:57:08.912845 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-06 00:57:08.912855 | orchestrator | Saturday 06 September 2025 00:56:33 +0000 (0:00:10.995) 0:02:08.161 **** 2025-09-06 00:57:08.912864 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-06 00:57:08.912874 | orchestrator | 2025-09-06 00:57:08.912884 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-06 00:57:08.912893 | orchestrator | Saturday 06 September 2025 00:56:55 +0000 (0:00:21.412) 0:02:29.574 **** 2025-09-06 00:57:08.912903 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-06 00:57:08.912912 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-06 00:57:08.912922 | orchestrator | 2025-09-06 00:57:08.912931 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-06 00:57:08.912941 | orchestrator | Saturday 06 September 2025 00:57:02 +0000 (0:00:07.553) 0:02:37.128 **** 2025-09-06 00:57:08.912951 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.912960 | orchestrator | 2025-09-06 00:57:08.912970 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-06 00:57:08.912980 | orchestrator | Saturday 06 September 2025 00:57:02 +0000 (0:00:00.194) 0:02:37.322 **** 2025-09-06 00:57:08.912989 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.912999 | orchestrator | 2025-09-06 00:57:08.913009 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-06 00:57:08.913018 | orchestrator | Saturday 06 September 2025 00:57:03 +0000 (0:00:00.344) 0:02:37.667 **** 2025-09-06 00:57:08.913028 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.913037 | orchestrator | 2025-09-06 00:57:08.913047 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-06 00:57:08.913056 | orchestrator | Saturday 06 September 2025 00:57:03 +0000 (0:00:00.346) 0:02:38.013 **** 2025-09-06 00:57:08.913066 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.913076 | orchestrator | 2025-09-06 00:57:08.913085 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-06 00:57:08.913095 | orchestrator | Saturday 06 
September 2025 00:57:04 +0000 (0:00:00.704) 0:02:38.718 **** 2025-09-06 00:57:08.913105 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:57:08.913115 | orchestrator | 2025-09-06 00:57:08.913129 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-06 00:57:08.913139 | orchestrator | Saturday 06 September 2025 00:57:07 +0000 (0:00:03.512) 0:02:42.230 **** 2025-09-06 00:57:08.913149 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:57:08.913159 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:57:08.913168 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:57:08.913178 | orchestrator | 2025-09-06 00:57:08.913187 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:57:08.913197 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-06 00:57:08.913207 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-06 00:57:08.913217 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-06 00:57:08.913227 | orchestrator | 2025-09-06 00:57:08.913236 | orchestrator | 2025-09-06 00:57:08.913246 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:57:08.913255 | orchestrator | Saturday 06 September 2025 00:57:08 +0000 (0:00:00.329) 0:02:42.560 **** 2025-09-06 00:57:08.913265 | orchestrator | =============================================================================== 2025-09-06 00:57:08.913275 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 23.69s 2025-09-06 00:57:08.913284 | orchestrator | service-ks-register : keystone | Creating services --------------------- 21.41s 2025-09-06 00:57:08.913294 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.58s 2025-09-06 00:57:08.913304 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.90s 2025-09-06 00:57:08.913318 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.00s 2025-09-06 00:57:08.913332 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.80s 2025-09-06 00:57:08.913342 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.14s 2025-09-06 00:57:08.913352 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.55s 2025-09-06 00:57:08.913362 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.46s 2025-09-06 00:57:08.913371 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.98s 2025-09-06 00:57:08.913381 | orchestrator | keystone : Creating default user role ----------------------------------- 3.51s 2025-09-06 00:57:08.913390 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.28s 2025-09-06 00:57:08.913400 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.16s 2025-09-06 00:57:08.913430 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.01s 2025-09-06 00:57:08.913440 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.33s 2025-09-06 00:57:08.913450 | orchestrator 
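Before the "Run key distribution" step logged above, the play waits for the keystone-ssh containers to accept connections, since fernet keys are synced between the nodes over that SSH service (port 8023 according to the logged healthcheck). Below is a sketch of such a wait-for-TCP-port check in the spirit of that task; the host list, deadline, and poll interval are assumptions, only the port number comes from the log.

```python
# Sketch of a wait-for-port check similar in spirit to the
# "Waiting for Keystone SSH port to be UP" task. Port 8023 is taken from the
# keystone-ssh healthcheck in the log; hosts and timings are assumptions.
import socket
import time

HOSTS = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
PORT = 8023          # keystone-ssh sshd port from the logged healthcheck
DEADLINE = 300       # give up after five minutes
POLL_INTERVAL = 2    # seconds between attempts


def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def wait_for_ports() -> None:
    start = time.monotonic()
    pending = set(HOSTS)
    while pending and time.monotonic() - start < DEADLINE:
        pending = {h for h in pending if not port_open(h, PORT)}
        if pending:
            time.sleep(POLL_INTERVAL)
    if pending:
        raise TimeoutError(f"keystone-ssh not reachable on: {sorted(pending)}")


if __name__ == "__main__":
    wait_for_ports()
```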
| keystone : Check keystone containers ------------------------------------ 2.23s 2025-09-06 00:57:08.913460 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.20s 2025-09-06 00:57:08.913469 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.09s 2025-09-06 00:57:08.913479 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.85s 2025-09-06 00:57:08.913488 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.74s 2025-09-06 00:57:08.913498 | orchestrator | 2025-09-06 00:57:08 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:08.913508 | orchestrator | 2025-09-06 00:57:08 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:11.936344 | orchestrator | 2025-09-06 00:57:11 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:11.936479 | orchestrator | 2025-09-06 00:57:11 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:11.937025 | orchestrator | 2025-09-06 00:57:11 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:11.937496 | orchestrator | 2025-09-06 00:57:11 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state STARTED 2025-09-06 00:57:11.937940 | orchestrator | 2025-09-06 00:57:11 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:11.937960 | orchestrator | 2025-09-06 00:57:11 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:14.964799 | orchestrator | 2025-09-06 00:57:14 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:14.965149 | orchestrator | 2025-09-06 00:57:14 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:14.965647 | orchestrator | 2025-09-06 00:57:14 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:14.966271 | orchestrator | 2025-09-06 00:57:14 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state STARTED 2025-09-06 00:57:14.967301 | orchestrator | 2025-09-06 00:57:14 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:14.967330 | orchestrator | 2025-09-06 00:57:14 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:18.006361 | orchestrator | 2025-09-06 00:57:18 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:18.007016 | orchestrator | 2025-09-06 00:57:18 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:18.007929 | orchestrator | 2025-09-06 00:57:18 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:18.008933 | orchestrator | 2025-09-06 00:57:18 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state STARTED 2025-09-06 00:57:18.011094 | orchestrator | 2025-09-06 00:57:18 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:18.011127 | orchestrator | 2025-09-06 00:57:18 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:21.055778 | orchestrator | 2025-09-06 00:57:21 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:21.056224 | orchestrator | 2025-09-06 00:57:21 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:21.057537 | orchestrator | 2025-09-06 00:57:21 | INFO  | Task 
8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:21.060383 | orchestrator | 2025-09-06 00:57:21 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state STARTED 2025-09-06 00:57:21.061561 | orchestrator | 2025-09-06 00:57:21 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:21.061580 | orchestrator | 2025-09-06 00:57:21 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:24.090683 | orchestrator | 2025-09-06 00:57:24 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:24.090982 | orchestrator | 2025-09-06 00:57:24 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:24.091587 | orchestrator | 2025-09-06 00:57:24 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:24.092213 | orchestrator | 2025-09-06 00:57:24 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state STARTED 2025-09-06 00:57:24.092818 | orchestrator | 2025-09-06 00:57:24 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:24.092945 | orchestrator | 2025-09-06 00:57:24 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:27.132961 | orchestrator | 2025-09-06 00:57:27 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:27.133047 | orchestrator | 2025-09-06 00:57:27 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:27.133061 | orchestrator | 2025-09-06 00:57:27 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:27.133072 | orchestrator | 2025-09-06 00:57:27 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state STARTED 2025-09-06 00:57:27.133082 | orchestrator | 2025-09-06 00:57:27 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:27.133091 | orchestrator | 2025-09-06 00:57:27 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:30.143559 | orchestrator | 2025-09-06 00:57:30 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:30.144213 | orchestrator | 2025-09-06 00:57:30 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:30.148273 | orchestrator | 2025-09-06 00:57:30 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:30.150185 | orchestrator | 2025-09-06 00:57:30 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state STARTED 2025-09-06 00:57:30.152132 | orchestrator | 2025-09-06 00:57:30 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:30.152456 | orchestrator | 2025-09-06 00:57:30 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:33.410156 | orchestrator | 2025-09-06 00:57:33 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:33.410221 | orchestrator | 2025-09-06 00:57:33 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:57:33.410229 | orchestrator | 2025-09-06 00:57:33 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:33.410235 | orchestrator | 2025-09-06 00:57:33 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:33.410241 | orchestrator | 2025-09-06 00:57:33 | INFO  | Task 5c26d474-ffb5-4ef1-a7d1-c1a764fd65f5 is in state SUCCESS 2025-09-06 00:57:33.410247 | orchestrator | 2025-09-06 
00:57:33 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:33.410253 | orchestrator | 2025-09-06 00:57:33 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:36.219599 | orchestrator | 2025-09-06 00:57:36 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:36.220970 | orchestrator | 2025-09-06 00:57:36 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:57:36.222551 | orchestrator | 2025-09-06 00:57:36 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:36.224092 | orchestrator | 2025-09-06 00:57:36 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:36.225282 | orchestrator | 2025-09-06 00:57:36 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:36.225301 | orchestrator | 2025-09-06 00:57:36 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:39.256973 | orchestrator | 2025-09-06 00:57:39 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:39.257446 | orchestrator | 2025-09-06 00:57:39 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:57:39.257610 | orchestrator | 2025-09-06 00:57:39 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:39.258313 | orchestrator | 2025-09-06 00:57:39 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:39.259074 | orchestrator | 2025-09-06 00:57:39 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:39.259092 | orchestrator | 2025-09-06 00:57:39 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:42.287887 | orchestrator | 2025-09-06 00:57:42 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:42.287978 | orchestrator | 2025-09-06 00:57:42 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:57:42.288889 | orchestrator | 2025-09-06 00:57:42 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:42.289339 | orchestrator | 2025-09-06 00:57:42 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:42.290095 | orchestrator | 2025-09-06 00:57:42 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:42.290119 | orchestrator | 2025-09-06 00:57:42 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:45.318975 | orchestrator | 2025-09-06 00:57:45 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:45.319546 | orchestrator | 2025-09-06 00:57:45 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:57:45.320192 | orchestrator | 2025-09-06 00:57:45 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:45.320907 | orchestrator | 2025-09-06 00:57:45 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:45.322519 | orchestrator | 2025-09-06 00:57:45 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:45.322543 | orchestrator | 2025-09-06 00:57:45 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:48.350565 | orchestrator | 2025-09-06 00:57:48 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:48.350651 | orchestrator | 2025-09-06 
00:57:48 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:57:48.350995 | orchestrator | 2025-09-06 00:57:48 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:48.351524 | orchestrator | 2025-09-06 00:57:48 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:48.352045 | orchestrator | 2025-09-06 00:57:48 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:48.352066 | orchestrator | 2025-09-06 00:57:48 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:51.380804 | orchestrator | 2025-09-06 00:57:51 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:51.381847 | orchestrator | 2025-09-06 00:57:51 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:57:51.382299 | orchestrator | 2025-09-06 00:57:51 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:51.383147 | orchestrator | 2025-09-06 00:57:51 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:51.384402 | orchestrator | 2025-09-06 00:57:51 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:51.384426 | orchestrator | 2025-09-06 00:57:51 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:54.414848 | orchestrator | 2025-09-06 00:57:54 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:54.414965 | orchestrator | 2025-09-06 00:57:54 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:57:54.415207 | orchestrator | 2025-09-06 00:57:54 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:54.415969 | orchestrator | 2025-09-06 00:57:54 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:54.416661 | orchestrator | 2025-09-06 00:57:54 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:54.416687 | orchestrator | 2025-09-06 00:57:54 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:57:57.438704 | orchestrator | 2025-09-06 00:57:57 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:57:57.439215 | orchestrator | 2025-09-06 00:57:57 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:57:57.440548 | orchestrator | 2025-09-06 00:57:57 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:57:57.440898 | orchestrator | 2025-09-06 00:57:57 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:57:57.442885 | orchestrator | 2025-09-06 00:57:57 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:57:57.442915 | orchestrator | 2025-09-06 00:57:57 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:00.467567 | orchestrator | 2025-09-06 00:58:00 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:58:00.467998 | orchestrator | 2025-09-06 00:58:00 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:00.468599 | orchestrator | 2025-09-06 00:58:00 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:00.469275 | orchestrator | 2025-09-06 00:58:00 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:00.469978 | 
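The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines come from the deploy wrapper polling several asynchronous tasks in parallel until each reaches a terminal state (for example, task 5c26d474-… flips to SUCCESS at 00:57:33). The sketch below shows that kind of polling loop; `get_task_state` is a hypothetical placeholder, since the real tooling queries its own task backend rather than this function.

```python
# Illustrative polling loop matching the "Task ... is in state STARTED" /
# "Wait 1 second(s) until the next check" messages in this log.
# get_task_state() is a hypothetical stand-in for the real state lookup.
import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                    level=logging.INFO)

TERMINAL_STATES = {"SUCCESS", "FAILURE"}


def get_task_state(task_id: str) -> str:
    """Hypothetical placeholder for querying a task's current state."""
    raise NotImplementedError


def wait_for_tasks(task_ids: list[str], interval: int = 1) -> dict[str, str]:
    states: dict[str, str] = {}
    pending = list(task_ids)
    while pending:
        for task_id in pending:
            states[task_id] = get_task_state(task_id)
            logging.info("Task %s is in state %s", task_id, states[task_id])
        pending = [t for t in pending if states[t] not in TERMINAL_STATES]
        if pending:
            logging.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)
    return states
```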
orchestrator | 2025-09-06 00:58:00 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:00.469997 | orchestrator | 2025-09-06 00:58:00 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:03.491651 | orchestrator | 2025-09-06 00:58:03 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:58:03.491760 | orchestrator | 2025-09-06 00:58:03 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:03.493212 | orchestrator | 2025-09-06 00:58:03 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:03.493556 | orchestrator | 2025-09-06 00:58:03 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:03.494848 | orchestrator | 2025-09-06 00:58:03 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:03.494877 | orchestrator | 2025-09-06 00:58:03 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:06.518422 | orchestrator | 2025-09-06 00:58:06 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:58:06.518613 | orchestrator | 2025-09-06 00:58:06 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:06.520428 | orchestrator | 2025-09-06 00:58:06 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:06.520972 | orchestrator | 2025-09-06 00:58:06 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:06.521974 | orchestrator | 2025-09-06 00:58:06 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:06.522063 | orchestrator | 2025-09-06 00:58:06 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:09.540942 | orchestrator | 2025-09-06 00:58:09 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:58:09.541121 | orchestrator | 2025-09-06 00:58:09 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:09.541810 | orchestrator | 2025-09-06 00:58:09 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:09.542515 | orchestrator | 2025-09-06 00:58:09 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:09.543000 | orchestrator | 2025-09-06 00:58:09 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:09.543088 | orchestrator | 2025-09-06 00:58:09 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:12.566067 | orchestrator | 2025-09-06 00:58:12 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state STARTED 2025-09-06 00:58:12.566309 | orchestrator | 2025-09-06 00:58:12 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:12.566875 | orchestrator | 2025-09-06 00:58:12 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:12.567348 | orchestrator | 2025-09-06 00:58:12 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:12.568279 | orchestrator | 2025-09-06 00:58:12 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:12.568301 | orchestrator | 2025-09-06 00:58:12 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:15.591052 | orchestrator | 2025-09-06 00:58:15.591108 | orchestrator | 2025-09-06 00:58:15.591115 | orchestrator | PLAY [Group hosts based on 
configuration] ************************************** 2025-09-06 00:58:15.591122 | orchestrator | 2025-09-06 00:58:15.591127 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:58:15.591142 | orchestrator | Saturday 06 September 2025 00:57:01 +0000 (0:00:00.271) 0:00:00.271 **** 2025-09-06 00:58:15.591147 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:58:15.591152 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:58:15.591157 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:58:15.591162 | orchestrator | ok: [testbed-manager] 2025-09-06 00:58:15.591167 | orchestrator | ok: [testbed-node-3] 2025-09-06 00:58:15.591172 | orchestrator | ok: [testbed-node-4] 2025-09-06 00:58:15.591177 | orchestrator | ok: [testbed-node-5] 2025-09-06 00:58:15.591183 | orchestrator | 2025-09-06 00:58:15.591188 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:58:15.591195 | orchestrator | Saturday 06 September 2025 00:57:01 +0000 (0:00:00.814) 0:00:01.086 **** 2025-09-06 00:58:15.591200 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-06 00:58:15.591206 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-06 00:58:15.591212 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-06 00:58:15.591218 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-06 00:58:15.591224 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-06 00:58:15.591229 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-06 00:58:15.591235 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-06 00:58:15.591241 | orchestrator | 2025-09-06 00:58:15.591246 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-06 00:58:15.591252 | orchestrator | 2025-09-06 00:58:15.591257 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-06 00:58:15.591263 | orchestrator | Saturday 06 September 2025 00:57:02 +0000 (0:00:00.938) 0:00:02.024 **** 2025-09-06 00:58:15.591269 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 00:58:15.591287 | orchestrator | 2025-09-06 00:58:15.591294 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-06 00:58:15.591299 | orchestrator | Saturday 06 September 2025 00:57:04 +0000 (0:00:01.828) 0:00:03.853 **** 2025-09-06 00:58:15.591305 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-06 00:58:15.591311 | orchestrator | 2025-09-06 00:58:15.591316 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-06 00:58:15.591322 | orchestrator | Saturday 06 September 2025 00:57:08 +0000 (0:00:03.662) 0:00:07.515 **** 2025-09-06 00:58:15.591328 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-06 00:58:15.591350 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-06 00:58:15.591355 | orchestrator | 2025-09-06 00:58:15.591359 | orchestrator | TASK [service-ks-register : ceph-rgw | 
Creating projects] ********************** 2025-09-06 00:58:15.591364 | orchestrator | Saturday 06 September 2025 00:57:13 +0000 (0:00:05.539) 0:00:13.055 **** 2025-09-06 00:58:15.591368 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-06 00:58:15.591372 | orchestrator | 2025-09-06 00:58:15.591378 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-06 00:58:15.591383 | orchestrator | Saturday 06 September 2025 00:57:16 +0000 (0:00:02.955) 0:00:16.010 **** 2025-09-06 00:58:15.591388 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-06 00:58:15.591394 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-06 00:58:15.591400 | orchestrator | 2025-09-06 00:58:15.591405 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-06 00:58:15.591411 | orchestrator | Saturday 06 September 2025 00:57:20 +0000 (0:00:03.871) 0:00:19.882 **** 2025-09-06 00:58:15.591417 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-06 00:58:15.591422 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-06 00:58:15.591428 | orchestrator | 2025-09-06 00:58:15.591434 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-06 00:58:15.591440 | orchestrator | Saturday 06 September 2025 00:57:27 +0000 (0:00:06.255) 0:00:26.137 **** 2025-09-06 00:58:15.591446 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-06 00:58:15.591452 | orchestrator | 2025-09-06 00:58:15.591457 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:58:15.591463 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:58:15.591469 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:58:15.591475 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:58:15.591481 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:58:15.591487 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:58:15.591502 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:58:15.591508 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:58:15.591513 | orchestrator | 2025-09-06 00:58:15.591519 | orchestrator | 2025-09-06 00:58:15.591528 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:58:15.591539 | orchestrator | Saturday 06 September 2025 00:57:31 +0000 (0:00:04.449) 0:00:30.586 **** 2025-09-06 00:58:15.591545 | orchestrator | =============================================================================== 2025-09-06 00:58:15.591551 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.26s 2025-09-06 00:58:15.591556 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.54s 2025-09-06 00:58:15.591562 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.45s 2025-09-06 00:58:15.591568 | orchestrator | 
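The service-ks-register steps logged above (for keystone earlier and for ceph-rgw here) boil down to creating a service entry in the Keystone catalog and registering its internal and public endpoints. The sketch below expresses the ceph-rgw/swift case with openstacksdk; the service name, type, and endpoint URLs are copied from the log, while the cloud name is an assumption and the actual role runs through kolla's Ansible modules rather than a script like this.

```python
# Hedged sketch of the catalog registration performed by service-ks-register,
# using openstacksdk. URLs and names come from the log; the cloud entry
# "testbed" in clouds.yaml is an assumption.
import openstack

conn = openstack.connect(cloud="testbed")

ENDPOINTS = {
    "internal": "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
    "public": "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
}

# Create the object-store service entry for the Ceph RadosGW.
service = conn.identity.create_service(name="swift", type="object-store")

# Register one endpoint per interface in the RegionOne catalog.
for interface, url in ENDPOINTS.items():
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",
    )
```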
service-ks-register : ceph-rgw | Creating users ------------------------- 3.87s 2025-09-06 00:58:15.591573 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.66s 2025-09-06 00:58:15.591579 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.96s 2025-09-06 00:58:15.591585 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.83s 2025-09-06 00:58:15.591591 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2025-09-06 00:58:15.591597 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.81s 2025-09-06 00:58:15.591603 | orchestrator | 2025-09-06 00:58:15.591608 | orchestrator | 2025-09-06 00:58:15.591614 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-06 00:58:15.591620 | orchestrator | 2025-09-06 00:58:15.591626 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-06 00:58:15.591631 | orchestrator | Saturday 06 September 2025 00:56:54 +0000 (0:00:00.199) 0:00:00.199 **** 2025-09-06 00:58:15.591637 | orchestrator | changed: [testbed-manager] 2025-09-06 00:58:15.591643 | orchestrator | 2025-09-06 00:58:15.591650 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-06 00:58:15.591656 | orchestrator | Saturday 06 September 2025 00:56:56 +0000 (0:00:01.671) 0:00:01.870 **** 2025-09-06 00:58:15.591663 | orchestrator | changed: [testbed-manager] 2025-09-06 00:58:15.591669 | orchestrator | 2025-09-06 00:58:15.591675 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-06 00:58:15.591682 | orchestrator | Saturday 06 September 2025 00:56:57 +0000 (0:00:00.902) 0:00:02.773 **** 2025-09-06 00:58:15.591688 | orchestrator | changed: [testbed-manager] 2025-09-06 00:58:15.591694 | orchestrator | 2025-09-06 00:58:15.591701 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-06 00:58:15.591707 | orchestrator | Saturday 06 September 2025 00:56:58 +0000 (0:00:00.913) 0:00:03.686 **** 2025-09-06 00:58:15.591714 | orchestrator | changed: [testbed-manager] 2025-09-06 00:58:15.591720 | orchestrator | 2025-09-06 00:58:15.591726 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-06 00:58:15.591733 | orchestrator | Saturday 06 September 2025 00:56:59 +0000 (0:00:01.176) 0:00:04.863 **** 2025-09-06 00:58:15.591739 | orchestrator | changed: [testbed-manager] 2025-09-06 00:58:15.591745 | orchestrator | 2025-09-06 00:58:15.591752 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-06 00:58:15.591758 | orchestrator | Saturday 06 September 2025 00:57:00 +0000 (0:00:00.996) 0:00:05.859 **** 2025-09-06 00:58:15.591765 | orchestrator | changed: [testbed-manager] 2025-09-06 00:58:15.591771 | orchestrator | 2025-09-06 00:58:15.591777 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-06 00:58:15.591784 | orchestrator | Saturday 06 September 2025 00:57:01 +0000 (0:00:00.840) 0:00:06.700 **** 2025-09-06 00:58:15.591790 | orchestrator | changed: [testbed-manager] 2025-09-06 00:58:15.591796 | orchestrator | 2025-09-06 00:58:15.591803 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] 
************************* 2025-09-06 00:58:15.591809 | orchestrator | Saturday 06 September 2025 00:57:02 +0000 (0:00:01.222) 0:00:07.923 **** 2025-09-06 00:58:15.591815 | orchestrator | changed: [testbed-manager] 2025-09-06 00:58:15.591822 | orchestrator | 2025-09-06 00:58:15.591828 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-06 00:58:15.591834 | orchestrator | Saturday 06 September 2025 00:57:03 +0000 (0:00:01.104) 0:00:09.028 **** 2025-09-06 00:58:15.591843 | orchestrator | changed: [testbed-manager] 2025-09-06 00:58:15.591849 | orchestrator | 2025-09-06 00:58:15.591856 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-06 00:58:15.591862 | orchestrator | Saturday 06 September 2025 00:57:49 +0000 (0:00:45.828) 0:00:54.857 **** 2025-09-06 00:58:15.591869 | orchestrator | skipping: [testbed-manager] 2025-09-06 00:58:15.591875 | orchestrator | 2025-09-06 00:58:15.591881 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-06 00:58:15.591887 | orchestrator | 2025-09-06 00:58:15.591894 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-06 00:58:15.591900 | orchestrator | Saturday 06 September 2025 00:57:49 +0000 (0:00:00.126) 0:00:54.984 **** 2025-09-06 00:58:15.591906 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:58:15.591912 | orchestrator | 2025-09-06 00:58:15.591919 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-06 00:58:15.591925 | orchestrator | 2025-09-06 00:58:15.591931 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-06 00:58:15.591937 | orchestrator | Saturday 06 September 2025 00:58:01 +0000 (0:00:11.370) 0:01:06.354 **** 2025-09-06 00:58:15.591944 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:58:15.591950 | orchestrator | 2025-09-06 00:58:15.591957 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-06 00:58:15.591963 | orchestrator | 2025-09-06 00:58:15.591969 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-06 00:58:15.591975 | orchestrator | Saturday 06 September 2025 00:58:02 +0000 (0:00:01.184) 0:01:07.539 **** 2025-09-06 00:58:15.591981 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:58:15.591988 | orchestrator | 2025-09-06 00:58:15.591998 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 00:58:15.592006 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-06 00:58:15.592011 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:58:15.592017 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:58:15.592021 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 00:58:15.592026 | orchestrator | 2025-09-06 00:58:15.592031 | orchestrator | 2025-09-06 00:58:15.592036 | orchestrator | 2025-09-06 00:58:15.592040 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 00:58:15.592045 | orchestrator | Saturday 06 September 2025 00:58:13 +0000 
(0:00:11.078) 0:01:18.617 **** 2025-09-06 00:58:15.592050 | orchestrator | =============================================================================== 2025-09-06 00:58:15.592055 | orchestrator | Create admin user ------------------------------------------------------ 45.83s 2025-09-06 00:58:15.592060 | orchestrator | Restart ceph manager service ------------------------------------------- 23.63s 2025-09-06 00:58:15.592065 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.67s 2025-09-06 00:58:15.592071 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.22s 2025-09-06 00:58:15.592141 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.18s 2025-09-06 00:58:15.592148 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.10s 2025-09-06 00:58:15.592154 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.00s 2025-09-06 00:58:15.592160 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.91s 2025-09-06 00:58:15.592166 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.90s 2025-09-06 00:58:15.592177 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.84s 2025-09-06 00:58:15.592183 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2025-09-06 00:58:15.592189 | orchestrator | 2025-09-06 00:58:15 | INFO  | Task e046b12e-174b-4d8e-b6b3-c3a2196c56e1 is in state SUCCESS 2025-09-06 00:58:15.592197 | orchestrator | 2025-09-06 00:58:15 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:15.592815 | orchestrator | 2025-09-06 00:58:15 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:15.593269 | orchestrator | 2025-09-06 00:58:15 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:15.593974 | orchestrator | 2025-09-06 00:58:15 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:15.593987 | orchestrator | 2025-09-06 00:58:15 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:18.610364 | orchestrator | 2025-09-06 00:58:18 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:18.613372 | orchestrator | 2025-09-06 00:58:18 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:18.613755 | orchestrator | 2025-09-06 00:58:18 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:18.615210 | orchestrator | 2025-09-06 00:58:18 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:18.615232 | orchestrator | 2025-09-06 00:58:18 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:21.636203 | orchestrator | 2025-09-06 00:58:21 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:21.639005 | orchestrator | 2025-09-06 00:58:21 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:21.639237 | orchestrator | 2025-09-06 00:58:21 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:21.639972 | orchestrator | 2025-09-06 00:58:21 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:21.639996 | orchestrator | 
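The ceph dashboard bootstrap play above disables the dashboard mgr module, sets its options (SSL off, port 7000, bind address 0.0.0.0, standby behaviour "error" with status code 404), re-enables the module, creates an admin user from a temporary password file, and then restarts the managers one by one. The sketch below drives the equivalent standard Ceph CLI calls from Python; the password file path and the "administrator" role are assumptions, and the real play wraps these steps in Ansible tasks rather than a script.

```python
# Sketch of the Ceph CLI calls that the dashboard bootstrap play above wraps.
# The config keys mirror the task names in the log; the password file path and
# dashboard role are assumptions for illustration.
import subprocess

SETTINGS = {
    "mgr/dashboard/ssl": "false",
    "mgr/dashboard/server_port": "7000",
    "mgr/dashboard/server_addr": "0.0.0.0",
    "mgr/dashboard/standby_behaviour": "error",
    "mgr/dashboard/standby_error_status_code": "404",
}


def ceph(*args: str) -> None:
    """Run a ceph CLI command and fail loudly if it returns non-zero."""
    subprocess.run(["ceph", *args], check=True)


def bootstrap_dashboard(password_file: str = "/tmp/ceph_dashboard_password") -> None:
    ceph("mgr", "module", "disable", "dashboard")
    for key, value in SETTINGS.items():
        ceph("config", "set", "mgr", key, value)
    ceph("mgr", "module", "enable", "dashboard")
    # Create the dashboard admin account from the temporary password file.
    ceph("dashboard", "ac-user-create", "admin", "-i", password_file, "administrator")


if __name__ == "__main__":
    bootstrap_dashboard()
```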
2025-09-06 00:58:21 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:24.663783 | orchestrator | 2025-09-06 00:58:24 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:24.664167 | orchestrator | 2025-09-06 00:58:24 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:24.664745 | orchestrator | 2025-09-06 00:58:24 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:24.665483 | orchestrator | 2025-09-06 00:58:24 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:24.665504 | orchestrator | 2025-09-06 00:58:24 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:27.693563 | orchestrator | 2025-09-06 00:58:27 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:27.693750 | orchestrator | 2025-09-06 00:58:27 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:27.694249 | orchestrator | 2025-09-06 00:58:27 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:27.694810 | orchestrator | 2025-09-06 00:58:27 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:27.694834 | orchestrator | 2025-09-06 00:58:27 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:30.715631 | orchestrator | 2025-09-06 00:58:30 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:30.715824 | orchestrator | 2025-09-06 00:58:30 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:30.716926 | orchestrator | 2025-09-06 00:58:30 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:30.717506 | orchestrator | 2025-09-06 00:58:30 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:30.717527 | orchestrator | 2025-09-06 00:58:30 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:33.744125 | orchestrator | 2025-09-06 00:58:33 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:33.745309 | orchestrator | 2025-09-06 00:58:33 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:33.747152 | orchestrator | 2025-09-06 00:58:33 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:33.748224 | orchestrator | 2025-09-06 00:58:33 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:33.748247 | orchestrator | 2025-09-06 00:58:33 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:36.797455 | orchestrator | 2025-09-06 00:58:36 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:36.798444 | orchestrator | 2025-09-06 00:58:36 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:36.800009 | orchestrator | 2025-09-06 00:58:36 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:36.801631 | orchestrator | 2025-09-06 00:58:36 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:36.801666 | orchestrator | 2025-09-06 00:58:36 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:39.841200 | orchestrator | 2025-09-06 00:58:39 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:39.841928 | orchestrator | 2025-09-06 00:58:39 | INFO 
 | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:39.843742 | orchestrator | 2025-09-06 00:58:39 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:39.847291 | orchestrator | 2025-09-06 00:58:39 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:39.847343 | orchestrator | 2025-09-06 00:58:39 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:42.890876 | orchestrator | 2025-09-06 00:58:42 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:42.890969 | orchestrator | 2025-09-06 00:58:42 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:42.891857 | orchestrator | 2025-09-06 00:58:42 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:42.892928 | orchestrator | 2025-09-06 00:58:42 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:42.892962 | orchestrator | 2025-09-06 00:58:42 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:45.936721 | orchestrator | 2025-09-06 00:58:45 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:45.937594 | orchestrator | 2025-09-06 00:58:45 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:45.940627 | orchestrator | 2025-09-06 00:58:45 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:45.947978 | orchestrator | 2025-09-06 00:58:45 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:45.948038 | orchestrator | 2025-09-06 00:58:45 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:48.986748 | orchestrator | 2025-09-06 00:58:48 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:48.987481 | orchestrator | 2025-09-06 00:58:48 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:48.988553 | orchestrator | 2025-09-06 00:58:48 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:48.989658 | orchestrator | 2025-09-06 00:58:48 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:48.989682 | orchestrator | 2025-09-06 00:58:48 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:52.046235 | orchestrator | 2025-09-06 00:58:52 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:52.048781 | orchestrator | 2025-09-06 00:58:52 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:52.051141 | orchestrator | 2025-09-06 00:58:52 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:52.053029 | orchestrator | 2025-09-06 00:58:52 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:52.053418 | orchestrator | 2025-09-06 00:58:52 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:55.089159 | orchestrator | 2025-09-06 00:58:55 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:55.089611 | orchestrator | 2025-09-06 00:58:55 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:55.090355 | orchestrator | 2025-09-06 00:58:55 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:55.092148 | orchestrator | 2025-09-06 00:58:55 | INFO  | 
Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:55.092170 | orchestrator | 2025-09-06 00:58:55 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:58:58.140624 | orchestrator | 2025-09-06 00:58:58 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:58:58.143389 | orchestrator | 2025-09-06 00:58:58 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:58:58.146438 | orchestrator | 2025-09-06 00:58:58 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:58:58.147710 | orchestrator | 2025-09-06 00:58:58 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:58:58.147735 | orchestrator | 2025-09-06 00:58:58 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:01.196061 | orchestrator | 2025-09-06 00:59:01 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:01.196156 | orchestrator | 2025-09-06 00:59:01 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:01.197029 | orchestrator | 2025-09-06 00:59:01 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:01.198190 | orchestrator | 2025-09-06 00:59:01 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:01.198212 | orchestrator | 2025-09-06 00:59:01 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:04.245428 | orchestrator | 2025-09-06 00:59:04 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:04.247393 | orchestrator | 2025-09-06 00:59:04 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:04.249179 | orchestrator | 2025-09-06 00:59:04 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:04.251482 | orchestrator | 2025-09-06 00:59:04 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:04.251774 | orchestrator | 2025-09-06 00:59:04 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:07.290935 | orchestrator | 2025-09-06 00:59:07 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:07.291686 | orchestrator | 2025-09-06 00:59:07 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:07.293582 | orchestrator | 2025-09-06 00:59:07 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:07.295388 | orchestrator | 2025-09-06 00:59:07 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:07.295628 | orchestrator | 2025-09-06 00:59:07 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:10.342683 | orchestrator | 2025-09-06 00:59:10 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:10.343132 | orchestrator | 2025-09-06 00:59:10 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:10.343863 | orchestrator | 2025-09-06 00:59:10 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:10.344942 | orchestrator | 2025-09-06 00:59:10 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:10.344965 | orchestrator | 2025-09-06 00:59:10 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:13.372591 | orchestrator | 2025-09-06 00:59:13 | INFO  | Task 
dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:13.373020 | orchestrator | 2025-09-06 00:59:13 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:13.373633 | orchestrator | 2025-09-06 00:59:13 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:13.374139 | orchestrator | 2025-09-06 00:59:13 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:13.374329 | orchestrator | 2025-09-06 00:59:13 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:16.398792 | orchestrator | 2025-09-06 00:59:16 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:16.399317 | orchestrator | 2025-09-06 00:59:16 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:16.400002 | orchestrator | 2025-09-06 00:59:16 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:16.400562 | orchestrator | 2025-09-06 00:59:16 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:16.400680 | orchestrator | 2025-09-06 00:59:16 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:19.426379 | orchestrator | 2025-09-06 00:59:19 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:19.426528 | orchestrator | 2025-09-06 00:59:19 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:19.426987 | orchestrator | 2025-09-06 00:59:19 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:19.427768 | orchestrator | 2025-09-06 00:59:19 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:19.427792 | orchestrator | 2025-09-06 00:59:19 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:22.454531 | orchestrator | 2025-09-06 00:59:22 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:22.455071 | orchestrator | 2025-09-06 00:59:22 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:22.455472 | orchestrator | 2025-09-06 00:59:22 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:22.456130 | orchestrator | 2025-09-06 00:59:22 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:22.456149 | orchestrator | 2025-09-06 00:59:22 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:25.506622 | orchestrator | 2025-09-06 00:59:25 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:25.510789 | orchestrator | 2025-09-06 00:59:25 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:25.512487 | orchestrator | 2025-09-06 00:59:25 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:25.526860 | orchestrator | 2025-09-06 00:59:25 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:25.526959 | orchestrator | 2025-09-06 00:59:25 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:28.576446 | orchestrator | 2025-09-06 00:59:28 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:28.576585 | orchestrator | 2025-09-06 00:59:28 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:28.577802 | orchestrator | 2025-09-06 00:59:28 | INFO  | Task 
8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:28.579066 | orchestrator | 2025-09-06 00:59:28 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:28.579088 | orchestrator | 2025-09-06 00:59:28 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:31.626588 | orchestrator | 2025-09-06 00:59:31 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:31.627585 | orchestrator | 2025-09-06 00:59:31 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:31.629787 | orchestrator | 2025-09-06 00:59:31 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:31.631689 | orchestrator | 2025-09-06 00:59:31 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:31.632150 | orchestrator | 2025-09-06 00:59:31 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:34.671594 | orchestrator | 2025-09-06 00:59:34 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:34.673203 | orchestrator | 2025-09-06 00:59:34 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:34.675098 | orchestrator | 2025-09-06 00:59:34 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:34.676670 | orchestrator | 2025-09-06 00:59:34 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:34.676955 | orchestrator | 2025-09-06 00:59:34 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:37.721959 | orchestrator | 2025-09-06 00:59:37 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:37.722757 | orchestrator | 2025-09-06 00:59:37 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:37.723734 | orchestrator | 2025-09-06 00:59:37 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:37.724947 | orchestrator | 2025-09-06 00:59:37 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:37.725420 | orchestrator | 2025-09-06 00:59:37 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:40.767893 | orchestrator | 2025-09-06 00:59:40 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:40.768000 | orchestrator | 2025-09-06 00:59:40 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:40.768015 | orchestrator | 2025-09-06 00:59:40 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state STARTED 2025-09-06 00:59:40.768342 | orchestrator | 2025-09-06 00:59:40 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:40.768454 | orchestrator | 2025-09-06 00:59:40 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:43.876420 | orchestrator | 2025-09-06 00:59:43 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 00:59:43.876521 | orchestrator | 2025-09-06 00:59:43 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 00:59:43.876535 | orchestrator | 2025-09-06 00:59:43 | INFO  | Task 8e00fa2b-43ac-41db-a0d1-25f3dab05790 is in state SUCCESS 2025-09-06 00:59:43.876547 | orchestrator | 2025-09-06 00:59:43 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 00:59:43.876558 | orchestrator | 2025-09-06 00:59:43 | INFO  | Task 
24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:43.876569 | orchestrator | 2025-09-06 00:59:43 | INFO  | Wait 1 second(s) until the next check 2025-09-06 00:59:43.878571 | orchestrator | 2025-09-06 00:59:43.878610 | orchestrator | 2025-09-06 00:59:43.878622 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 00:59:43.878634 | orchestrator | 2025-09-06 00:59:43.878645 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 00:59:43.878657 | orchestrator | Saturday 06 September 2025 00:57:01 +0000 (0:00:00.204) 0:00:00.204 **** 2025-09-06 00:59:43.878668 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:59:43.878680 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:59:43.878691 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:59:43.878702 | orchestrator | 2025-09-06 00:59:43.878713 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 00:59:43.878724 | orchestrator | Saturday 06 September 2025 00:57:01 +0000 (0:00:00.300) 0:00:00.504 **** 2025-09-06 00:59:43.878735 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-06 00:59:43.878746 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-06 00:59:43.878772 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-06 00:59:43.878783 | orchestrator | 2025-09-06 00:59:43.878794 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-06 00:59:43.878805 | orchestrator | 2025-09-06 00:59:43.878816 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-06 00:59:43.878827 | orchestrator | Saturday 06 September 2025 00:57:01 +0000 (0:00:00.438) 0:00:00.943 **** 2025-09-06 00:59:43.878838 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:59:43.878849 | orchestrator | 2025-09-06 00:59:43.878862 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-06 00:59:43.878873 | orchestrator | Saturday 06 September 2025 00:57:02 +0000 (0:00:00.454) 0:00:01.397 **** 2025-09-06 00:59:43.878883 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-06 00:59:43.878894 | orchestrator | 2025-09-06 00:59:43.878905 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-06 00:59:43.878916 | orchestrator | Saturday 06 September 2025 00:57:06 +0000 (0:00:03.854) 0:00:05.252 **** 2025-09-06 00:59:43.878928 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-06 00:59:43.878990 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-06 00:59:43.879025 | orchestrator | 2025-09-06 00:59:43.879037 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-06 00:59:43.879048 | orchestrator | Saturday 06 September 2025 00:57:12 +0000 (0:00:05.925) 0:00:11.177 **** 2025-09-06 00:59:43.879059 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-06 00:59:43.879069 | orchestrator | 2025-09-06 00:59:43.879080 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-06 00:59:43.879091 | 
orchestrator | Saturday 06 September 2025 00:57:15 +0000 (0:00:03.198) 0:00:14.376 **** 2025-09-06 00:59:43.879103 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-06 00:59:43.879114 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-06 00:59:43.879124 | orchestrator | 2025-09-06 00:59:43.879135 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-06 00:59:43.879146 | orchestrator | Saturday 06 September 2025 00:57:18 +0000 (0:00:03.619) 0:00:17.996 **** 2025-09-06 00:59:43.879157 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-06 00:59:43.879168 | orchestrator | 2025-09-06 00:59:43.879178 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-06 00:59:43.879189 | orchestrator | Saturday 06 September 2025 00:57:21 +0000 (0:00:02.957) 0:00:20.953 **** 2025-09-06 00:59:43.879200 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-06 00:59:43.879211 | orchestrator | 2025-09-06 00:59:43.879221 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-06 00:59:43.879232 | orchestrator | Saturday 06 September 2025 00:57:26 +0000 (0:00:04.351) 0:00:25.305 **** 2025-09-06 00:59:43.879284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.879309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.879331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.879344 | orchestrator | 2025-09-06 00:59:43.879355 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-06 00:59:43.879366 | orchestrator | Saturday 06 September 2025 00:57:30 +0000 (0:00:04.439) 0:00:29.744 **** 2025-09-06 00:59:43.879378 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:59:43.879389 | orchestrator | 2025-09-06 00:59:43.879406 | orchestrator | TASK [glance : 
Ensuring glance service ceph config subdir exists] ************** 2025-09-06 00:59:43.879418 | orchestrator | Saturday 06 September 2025 00:57:31 +0000 (0:00:00.554) 0:00:30.299 **** 2025-09-06 00:59:43.879428 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:59:43.879439 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:59:43.879450 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:59:43.879461 | orchestrator | 2025-09-06 00:59:43.879472 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-06 00:59:43.879482 | orchestrator | Saturday 06 September 2025 00:57:34 +0000 (0:00:03.691) 0:00:33.990 **** 2025-09-06 00:59:43.879493 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-06 00:59:43.879511 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-06 00:59:43.879522 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-06 00:59:43.879533 | orchestrator | 2025-09-06 00:59:43.879544 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-06 00:59:43.879555 | orchestrator | Saturday 06 September 2025 00:57:36 +0000 (0:00:01.528) 0:00:35.519 **** 2025-09-06 00:59:43.879565 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-06 00:59:43.879576 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-06 00:59:43.879588 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-06 00:59:43.879598 | orchestrator | 2025-09-06 00:59:43.879614 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-06 00:59:43.879625 | orchestrator | Saturday 06 September 2025 00:57:37 +0000 (0:00:01.106) 0:00:36.625 **** 2025-09-06 00:59:43.879636 | orchestrator | ok: [testbed-node-0] 2025-09-06 00:59:43.879647 | orchestrator | ok: [testbed-node-1] 2025-09-06 00:59:43.879658 | orchestrator | ok: [testbed-node-2] 2025-09-06 00:59:43.879669 | orchestrator | 2025-09-06 00:59:43.879679 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-06 00:59:43.879690 | orchestrator | Saturday 06 September 2025 00:57:38 +0000 (0:00:00.605) 0:00:37.230 **** 2025-09-06 00:59:43.879701 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.879712 | orchestrator | 2025-09-06 00:59:43.879723 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-06 00:59:43.879734 | orchestrator | Saturday 06 September 2025 00:57:38 +0000 (0:00:00.258) 0:00:37.489 **** 2025-09-06 00:59:43.879744 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.879755 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:59:43.879766 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:59:43.879776 | orchestrator | 2025-09-06 00:59:43.879787 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-06 00:59:43.879798 | orchestrator | Saturday 06 September 2025 00:57:38 +0000 (0:00:00.264) 0:00:37.754 **** 2025-09-06 00:59:43.879809 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 00:59:43.879819 | orchestrator | 2025-09-06 00:59:43.879830 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-06 00:59:43.879841 | orchestrator | Saturday 06 September 2025 00:57:39 +0000 (0:00:00.504) 0:00:38.259 **** 2025-09-06 00:59:43.879859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.879883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.879897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.879909 | orchestrator | 2025-09-06 00:59:43.879920 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-06 00:59:43.879937 | orchestrator | Saturday 06 September 2025 00:57:42 +0000 (0:00:03.565) 0:00:41.825 **** 2025-09-06 00:59:43.879957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-06 00:59:43.879970 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:59:43.879987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-06 00:59:43.880000 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.880019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-06 00:59:43.880037 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:59:43.880048 | orchestrator | 2025-09-06 00:59:43.880059 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-06 00:59:43.880070 | orchestrator | Saturday 06 September 2025 00:57:47 +0000 (0:00:04.686) 0:00:46.511 **** 2025-09-06 00:59:43.880086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-06 00:59:43.880098 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:59:43.880117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-06 00:59:43.880135 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.880157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-06 00:59:43.880170 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:59:43.880181 | orchestrator | 2025-09-06 00:59:43.880192 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-06 00:59:43.880203 | orchestrator | Saturday 06 September 2025 00:57:50 +0000 (0:00:03.594) 0:00:50.105 **** 2025-09-06 00:59:43.880214 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:59:43.880225 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.880235 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:59:43.880277 | orchestrator | 2025-09-06 00:59:43.880288 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-06 
00:59:43.880299 | orchestrator | Saturday 06 September 2025 00:57:54 +0000 (0:00:03.018) 0:00:53.124 **** 2025-09-06 00:59:43.880317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.880342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.880354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.880374 | orchestrator | 2025-09-06 00:59:43.880385 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-06 00:59:43.880396 | orchestrator | Saturday 06 September 2025 00:57:57 +0000 (0:00:03.835) 0:00:56.959 **** 2025-09-06 00:59:43.880406 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:59:43.880417 | orchestrator | changed: [testbed-node-2] 2025-09-06 00:59:43.880428 | orchestrator | changed: [testbed-node-1] 2025-09-06 00:59:43.880438 | orchestrator | 2025-09-06 00:59:43.880449 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-06 00:59:43.880460 | orchestrator | Saturday 06 September 2025 00:58:04 +0000 (0:00:07.017) 0:01:03.977 **** 2025-09-06 00:59:43.880471 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:59:43.880481 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.880492 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:59:43.880503 | orchestrator | 2025-09-06 00:59:43.880514 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-06 00:59:43.880530 | orchestrator | Saturday 06 September 2025 00:58:09 +0000 (0:00:04.583) 0:01:08.560 **** 2025-09-06 00:59:43.880542 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:59:43.880552 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.880563 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:59:43.880574 | orchestrator | 2025-09-06 00:59:43.880585 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-06 00:59:43.880596 | orchestrator | Saturday 06 September 2025 00:58:15 +0000 (0:00:06.509) 0:01:15.070 
**** 2025-09-06 00:59:43.880607 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.880618 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:59:43.880628 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:59:43.880639 | orchestrator | 2025-09-06 00:59:43.880650 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-06 00:59:43.880661 | orchestrator | Saturday 06 September 2025 00:58:21 +0000 (0:00:05.622) 0:01:20.693 **** 2025-09-06 00:59:43.880672 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:59:43.880683 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.880693 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:59:43.880704 | orchestrator | 2025-09-06 00:59:43.880715 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-06 00:59:43.880726 | orchestrator | Saturday 06 September 2025 00:58:25 +0000 (0:00:03.939) 0:01:24.632 **** 2025-09-06 00:59:43.880737 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.880748 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:59:43.880758 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:59:43.880769 | orchestrator | 2025-09-06 00:59:43.880780 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-06 00:59:43.880791 | orchestrator | Saturday 06 September 2025 00:58:26 +0000 (0:00:00.490) 0:01:25.123 **** 2025-09-06 00:59:43.880802 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-06 00:59:43.880813 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.880828 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-06 00:59:43.880839 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:59:43.880858 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-06 00:59:43.880868 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:59:43.880879 | orchestrator | 2025-09-06 00:59:43.880890 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-06 00:59:43.880901 | orchestrator | Saturday 06 September 2025 00:58:29 +0000 (0:00:03.497) 0:01:28.620 **** 2025-09-06 00:59:43.880913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.880933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.880951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-06 00:59:43.880970 | orchestrator | 2025-09-06 00:59:43.880982 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-06 00:59:43.880992 | orchestrator | Saturday 06 September 2025 00:58:32 +0000 (0:00:03.464) 0:01:32.085 **** 2025-09-06 00:59:43.881003 | orchestrator | skipping: [testbed-node-0] 2025-09-06 00:59:43.881014 | orchestrator | skipping: [testbed-node-1] 2025-09-06 00:59:43.881025 | orchestrator | skipping: [testbed-node-2] 2025-09-06 00:59:43.881035 | orchestrator | 2025-09-06 00:59:43.881046 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-06 00:59:43.881057 | orchestrator | Saturday 06 September 2025 00:58:33 +0000 (0:00:00.260) 0:01:32.346 **** 2025-09-06 00:59:43.881068 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:59:43.881079 | orchestrator | 2025-09-06 00:59:43.881089 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-06 00:59:43.881100 | orchestrator | Saturday 06 September 2025 00:58:35 +0000 (0:00:02.108) 0:01:34.454 **** 2025-09-06 00:59:43.881111 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:59:43.881122 | orchestrator | 2025-09-06 00:59:43.881132 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-06 00:59:43.881143 | orchestrator | Saturday 06 September 2025 00:58:37 +0000 (0:00:02.310) 0:01:36.765 **** 2025-09-06 00:59:43.881154 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:59:43.881164 | orchestrator | 2025-09-06 00:59:43.881175 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-06 00:59:43.881186 | orchestrator | Saturday 06 September 2025 00:58:39 +0000 (0:00:02.157) 0:01:38.923 **** 2025-09-06 00:59:43.881196 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:59:43.881207 | orchestrator | 2025-09-06 00:59:43.881218 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-06 00:59:43.881229 | orchestrator | Saturday 06 September 2025 00:59:04 +0000 (0:00:24.365) 0:02:03.288 **** 2025-09-06 00:59:43.881294 | orchestrator | changed: [testbed-node-0] 2025-09-06 00:59:43.881308 | orchestrator | 2025-09-06 00:59:43.881325 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-06 00:59:43.881336 | orchestrator | Saturday 06 September 2025 00:59:06 +0000 (0:00:01.966) 0:02:05.255 **** 2025-09-06 00:59:43.881347 | orchestrator | 2025-09-06 00:59:43.881358 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-06 00:59:43.881369 | orchestrator | Saturday 06 September 2025 00:59:06 +0000 (0:00:00.068) 0:02:05.324 **** 2025-09-06 00:59:43.881389 | orchestrator | 2025-09-06 00:59:43.881400 | orchestrator | TASK [glance : Flush handlers] 
*************************************************
2025-09-06 00:59:43.881411 | orchestrator | Saturday 06 September 2025 00:59:06 +0000 (0:00:00.084) 0:02:05.409 ****
2025-09-06 00:59:43.881421 | orchestrator |
2025-09-06 00:59:43.881432 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-09-06 00:59:43.881443 | orchestrator | Saturday 06 September 2025 00:59:06 +0000 (0:00:00.068) 0:02:05.477 ****
2025-09-06 00:59:43.881454 | orchestrator | changed: [testbed-node-0]
2025-09-06 00:59:43.881464 | orchestrator | changed: [testbed-node-2]
2025-09-06 00:59:43.881475 | orchestrator | changed: [testbed-node-1]
2025-09-06 00:59:43.881486 | orchestrator |
2025-09-06 00:59:43.881496 | orchestrator | PLAY RECAP *********************************************************************
2025-09-06 00:59:43.881509 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-06 00:59:43.881521 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-06 00:59:43.881532 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-06 00:59:43.881543 | orchestrator |
2025-09-06 00:59:43.881554 | orchestrator |
2025-09-06 00:59:43.881570 | orchestrator | TASKS RECAP ********************************************************************
2025-09-06 00:59:43.881581 | orchestrator | Saturday 06 September 2025 00:59:41 +0000 (0:00:35.002) 0:02:40.480 ****
2025-09-06 00:59:43.881592 | orchestrator | ===============================================================================
2025-09-06 00:59:43.881603 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.00s
2025-09-06 00:59:43.881614 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 24.37s
2025-09-06 00:59:43.881624 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.02s
2025-09-06 00:59:43.881635 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.51s
2025-09-06 00:59:43.881646 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.93s
2025-09-06 00:59:43.881656 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.62s
2025-09-06 00:59:43.881667 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.69s
2025-09-06 00:59:43.881678 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.58s
2025-09-06 00:59:43.881688 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.44s
2025-09-06 00:59:43.881699 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.35s
2025-09-06 00:59:43.881710 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.94s
2025-09-06 00:59:43.881720 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.85s
2025-09-06 00:59:43.881731 | orchestrator | glance : Copying over config.json files for services -------------------- 3.84s
2025-09-06 00:59:43.881742 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.69s
2025-09-06 00:59:43.881752 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.62s
2025-09-06 00:59:43.881763 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.59s
2025-09-06 00:59:43.881773 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.57s
2025-09-06 00:59:43.881784 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.50s
2025-09-06 00:59:43.881795 | orchestrator | glance : Check glance containers ---------------------------------------- 3.46s
2025-09-06 00:59:43.881806 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.20s
2025-09-06 00:59:46.881957 | orchestrator | 2025-09-06 00:59:46 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED
2025-09-06 00:59:46.883712 | orchestrator | 2025-09-06 00:59:46 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED
2025-09-06 00:59:46.885462 | orchestrator | 2025-09-06 00:59:46 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED
2025-09-06 00:59:46.887117 | orchestrator | 2025-09-06 00:59:46 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED
2025-09-06 00:59:46.887356 | orchestrator | 2025-09-06 00:59:46 | INFO  | Wait 1 second(s) until the next check
2025-09-06 00:59:49.933300 | orchestrator | 2025-09-06 00:59:49 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED
2025-09-06 00:59:49.935264 | orchestrator | 2025-09-06 00:59:49 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED
2025-09-06 00:59:49.937042 | orchestrator | 2025-09-06 00:59:49 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED
2025-09-06 00:59:49.939105 | orchestrator | 2025-09-06 00:59:49 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED
2025-09-06 00:59:49.939258 | orchestrator | 2025-09-06 00:59:49 | INFO  | Wait 1 second(s) until the next check
2025-09-06 00:59:52.998324 | orchestrator | 2025-09-06 00:59:53 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED
2025-09-06 00:59:53.001826 | orchestrator | 2025-09-06 00:59:53 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED
2025-09-06 00:59:53.004283 | orchestrator | 2025-09-06 00:59:53 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED
2025-09-06 00:59:53.006486 | orchestrator | 2025-09-06 00:59:53 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED
2025-09-06 00:59:53.006514 | orchestrator | 2025-09-06 00:59:53 | INFO  | Wait 1 second(s) until the next check
2025-09-06 00:59:56.053034 | orchestrator | 2025-09-06 00:59:56 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED
2025-09-06 00:59:56.053934 | orchestrator | 2025-09-06 00:59:56 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED
2025-09-06 00:59:56.055821 | orchestrator | 2025-09-06 00:59:56 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED
2025-09-06 00:59:56.057018 | orchestrator | 2025-09-06 00:59:56 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED
2025-09-06 00:59:56.057060 | orchestrator | 2025-09-06 00:59:56 | INFO  | Wait 1 second(s) until the next check
2025-09-06 00:59:59.103157 | orchestrator | 2025-09-06 00:59:59 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED
2025-09-06 00:59:59.103971 | orchestrator | 2025-09-06 00:59:59 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED
2025-09-06 00:59:59.105822 |
orchestrator | 2025-09-06 00:59:59 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 00:59:59.107077 | orchestrator | 2025-09-06 00:59:59 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 00:59:59.107102 | orchestrator | 2025-09-06 00:59:59 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:02.152374 | orchestrator | 2025-09-06 01:00:02 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:02.154550 | orchestrator | 2025-09-06 01:00:02 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:02.156044 | orchestrator | 2025-09-06 01:00:02 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:02.157840 | orchestrator | 2025-09-06 01:00:02 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 01:00:02.158129 | orchestrator | 2025-09-06 01:00:02 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:05.198529 | orchestrator | 2025-09-06 01:00:05 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:05.199691 | orchestrator | 2025-09-06 01:00:05 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:05.201696 | orchestrator | 2025-09-06 01:00:05 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:05.203487 | orchestrator | 2025-09-06 01:00:05 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state STARTED 2025-09-06 01:00:05.203669 | orchestrator | 2025-09-06 01:00:05 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:08.243976 | orchestrator | 2025-09-06 01:00:08 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:08.245819 | orchestrator | 2025-09-06 01:00:08 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:08.248981 | orchestrator | 2025-09-06 01:00:08 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:08.255294 | orchestrator | 2025-09-06 01:00:08 | INFO  | Task 24ae297c-1054-46fe-806c-ff2905cace3a is in state SUCCESS 2025-09-06 01:00:08.257907 | orchestrator | 2025-09-06 01:00:08.257940 | orchestrator | 2025-09-06 01:00:08.257953 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 01:00:08.257965 | orchestrator | 2025-09-06 01:00:08.257976 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 01:00:08.257988 | orchestrator | Saturday 06 September 2025 00:56:55 +0000 (0:00:00.243) 0:00:00.243 **** 2025-09-06 01:00:08.257999 | orchestrator | ok: [testbed-manager] 2025-09-06 01:00:08.258124 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:00:08.258136 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:00:08.258147 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:00:08.258158 | orchestrator | ok: [testbed-node-3] 2025-09-06 01:00:08.258169 | orchestrator | ok: [testbed-node-4] 2025-09-06 01:00:08.258180 | orchestrator | ok: [testbed-node-5] 2025-09-06 01:00:08.258191 | orchestrator | 2025-09-06 01:00:08.258202 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 01:00:08.258241 | orchestrator | Saturday 06 September 2025 00:56:55 +0000 (0:00:00.670) 0:00:00.914 **** 2025-09-06 01:00:08.258253 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 
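The repeated "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" entries above are the manager-side wait loop polling the state of the four background tasks once per second until each one leaves STARTED; a task's playbook output is only flushed once it reports SUCCESS, as task 24ae297c-1054-46fe-806c-ff2905cace3a does at 01:00:08. The following Python sketch illustrates that polling pattern only; the `get_state` callable and the shortened task IDs in the toy example are stand-ins for this illustration, not the actual osism client API.

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll every task once per interval until none is left in STARTED.

    get_state is any callable mapping a task id to a state string; in the
    real deployment the state would come from the manager's task backend.
    """
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)


if __name__ == "__main__":
    # Toy backend: every task reports STARTED twice, then SUCCESS.
    polls = {}

    def fake_state(task_id):
        polls[task_id] = polls.get(task_id, 0) + 1
        return "SUCCESS" if polls[task_id] >= 3 else "STARTED"

    wait_for_tasks(fake_state, ["dfd2161b", "986c6f89", "65a07530", "24ae297c"], interval=0.1)
```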
2025-09-06 01:00:08.258265 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-06 01:00:08.258276 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-06 01:00:08.258287 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-06 01:00:08.258298 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-06 01:00:08.258309 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-06 01:00:08.258320 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-06 01:00:08.258331 | orchestrator | 2025-09-06 01:00:08.258342 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-06 01:00:08.258353 | orchestrator | 2025-09-06 01:00:08.258364 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-06 01:00:08.258375 | orchestrator | Saturday 06 September 2025 00:56:56 +0000 (0:00:00.627) 0:00:01.541 **** 2025-09-06 01:00:08.258388 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 01:00:08.258400 | orchestrator | 2025-09-06 01:00:08.258411 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-06 01:00:08.258423 | orchestrator | Saturday 06 September 2025 00:56:57 +0000 (0:00:01.332) 0:00:02.873 **** 2025-09-06 01:00:08.258453 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-06 01:00:08.258490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.258503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.258515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.258539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.258551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.258563 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.258580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.258601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.258614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.258626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.258638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.258655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.258668 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.258680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.258702 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.258714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.258726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.258737 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.258749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.258768 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-06 01:00:08.258783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.258804 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.258816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.258827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.258839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.258850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.258868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.258880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 
'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.258900 | orchestrator | 2025-09-06 01:00:08.258911 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-06 01:00:08.258923 | orchestrator | Saturday 06 September 2025 00:57:01 +0000 (0:00:03.154) 0:00:06.028 **** 2025-09-06 01:00:08.258934 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 01:00:08.258945 | orchestrator | 2025-09-06 01:00:08.258957 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-06 01:00:08.258969 | orchestrator | Saturday 06 September 2025 00:57:02 +0000 (0:00:01.182) 0:00:07.211 **** 2025-09-06 01:00:08.258985 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-06 01:00:08.258997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.259009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.259020 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.259037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.259049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.259066 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.259078 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.259093 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.259105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.259116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.259128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.259145 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.259157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.259174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.259196 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-06 01:00:08.259209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.259241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.259253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.259270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.259290 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.259301 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.259313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.259329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.259340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.259352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.259363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.259746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.259776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.259787 | orchestrator | 2025-09-06 01:00:08.259798 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-06 01:00:08.259843 | orchestrator | Saturday 06 September 2025 00:57:08 +0000 (0:00:05.841) 0:00:13.052 **** 2025-09-06 01:00:08.259855 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-06 01:00:08.259872 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.259884 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.259896 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-06 01:00:08.260989 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.261068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261127 | orchestrator | skipping: [testbed-manager] 2025-09-06 01:00:08.261161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.261185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.261279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261291 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.261321 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:08.261332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261343 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:08.261354 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.261372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261407 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:08.261423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.261435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261467 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:08.261479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.261490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261521 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.261532 | orchestrator | 2025-09-06 01:00:08.261547 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 
2025-09-06 01:00:08.261562 | orchestrator | Saturday 06 September 2025 00:57:09 +0000 (0:00:01.598) 0:00:14.651 **** 2025-09-06 01:00:08.261576 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-06 01:00:08.261594 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.261608 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261622 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-06 01:00:08.261643 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.261677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261743 | orchestrator | skipping: [testbed-manager] 2025-09-06 01:00:08.261756 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:08.261769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.261783 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261846 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:08.261858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.261876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-06 01:00:08.261932 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:08.261948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.261960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.261983 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:08.261994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.262009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.262075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.262087 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.262098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-06 01:00:08.262109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.262128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-06 01:00:08.262140 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.262151 | orchestrator | 2025-09-06 01:00:08.262162 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-06 01:00:08.262173 | orchestrator | Saturday 06 September 2025 00:57:11 +0000 (0:00:01.681) 0:00:16.333 **** 2025-09-06 01:00:08.262185 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-06 01:00:08.262196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.262234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.262247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.262258 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.262269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.262285 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.262297 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.262308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.262319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.262341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.262353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.262364 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.262376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.262393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.262404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.262416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.262436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.262452 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-06 01:00:08.262465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.262476 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.262493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.262505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.262516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.262533 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.262549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.262561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.262572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.262584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.262595 | orchestrator | 2025-09-06 01:00:08.262606 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-06 01:00:08.262617 | orchestrator | Saturday 06 September 2025 00:57:16 +0000 (0:00:05.350) 0:00:21.683 **** 2025-09-06 01:00:08.262628 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 01:00:08.262639 | orchestrator | 2025-09-06 01:00:08.262650 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-06 01:00:08.262674 | orchestrator | Saturday 06 September 2025 00:57:17 +0000 (0:00:00.959) 0:00:22.643 **** 2025-09-06 01:00:08.262686 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090352, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262704 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090352, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262717 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090392, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8713195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262732 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090352, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262744 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090352, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262755 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090348, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262772 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090392, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8713195, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262784 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090392, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8713195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262801 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090352, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262812 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090352, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.262827 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090352, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262840 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090377, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8693194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262851 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 12980, 'inode': 1090392, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8713195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262867 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090348, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262879 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090392, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8713195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262896 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090348, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262908 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090348, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262924 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090392, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8713195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262935 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090377, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8693194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262946 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090342, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8640707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262964 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090377, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8693194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262984 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090348, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.262996 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090348, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263008 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090377, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8693194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263023 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090342, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8640707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263034 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090342, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8640707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263046 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090377, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8693194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263057 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090392, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8713195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.263080 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090355, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8663194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263092 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090355, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8663194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263104 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090342, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8640707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263119 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090355, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8663194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263131 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090365, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8690262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263143 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090342, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8640707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263154 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090377, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8693194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263177 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090365, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8690262, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263189 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090355, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8663194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263200 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090365, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8690262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263243 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090365, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8690262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263256 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090355, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8663194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263267 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090359, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263279 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090342, 
'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8640707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263302 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090359, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263314 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090359, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263326 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090359, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263341 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090350, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263353 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090350, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263365 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090365, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8690262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263381 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090355, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8663194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263399 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090350, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263411 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090348, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.263422 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090388, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8705952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263438 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090388, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8705952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263450 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090350, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263461 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090331, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8625855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263478 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090359, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263496 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090365, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8690262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263508 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090388, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8705952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263519 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090350, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263534 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090359, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263546 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090388, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8705952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263557 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090331, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8625855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263574 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090331, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8625855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263591 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090388, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8705952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263604 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090350, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 
1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263615 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090404, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263630 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090331, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8625855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263642 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090331, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8625855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263660 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090404, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263672 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090388, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8705952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263689 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090404, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263701 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090404, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263712 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090331, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8625855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263731 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090384, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8699315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263742 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090404, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263761 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090384, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8699315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263772 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090404, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263789 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090346, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8643193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263801 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090384, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8699315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263813 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090377, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8693194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.263828 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090337, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8634589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263840 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090384, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8699315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263857 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090384, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8699315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263869 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090363, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8682168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263885 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090346, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8643193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263897 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090346, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8643193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263909 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090361, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8673193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263924 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090384, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8699315, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263936 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090337, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8634589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263954 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090346, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8643193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263966 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090346, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8643193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263983 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090337, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8634589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.263995 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090346, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8643193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264006 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090363, 'dev': 90, 
'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8682168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264021 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090402, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264038 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090337, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8634589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264049 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:08.264061 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090337, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8634589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264073 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090363, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8682168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264089 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090342, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8640707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264101 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090337, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8634589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264113 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090361, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8673193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264128 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090363, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8682168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264145 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090363, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8682168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264156 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090361, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8673193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264168 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090363, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8682168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264185 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090361, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8673193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264197 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090402, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264208 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:08.264235 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090361, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8673193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264251 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090361, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8673193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264268 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090402, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264279 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:08.264290 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090402, 'dev': 90, 
'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264301 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.264313 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090402, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264324 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.264402 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090402, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-06 01:00:08.264416 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:08.264428 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090355, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8663194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264439 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090365, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8690262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264462 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090359, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.866887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2025-09-06 01:00:08.264473 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090350, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8653195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264485 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090388, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8705952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264496 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090331, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8625855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264512 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090404, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264524 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090384, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8699315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264535 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090346, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8643193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264559 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090337, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8634589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264571 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090363, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8682168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264582 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090361, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8673193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264593 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090402, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8733194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-06 01:00:08.264604 | orchestrator | 2025-09-06 01:00:08.264615 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-06 01:00:08.264626 | orchestrator | Saturday 06 September 2025 00:57:42 +0000 (0:00:24.339) 0:00:46.982 **** 2025-09-06 01:00:08.264637 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 01:00:08.264648 | orchestrator | 2025-09-06 01:00:08.264658 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-06 01:00:08.264674 | orchestrator | Saturday 06 September 2025 00:57:42 +0000 (0:00:00.676) 0:00:47.659 **** 2025-09-06 01:00:08.264686 | orchestrator | [WARNING]: Skipped 2025-09-06 01:00:08.264698 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.264709 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-06 01:00:08.264720 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.264730 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-06 01:00:08.264742 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 01:00:08.264752 | orchestrator | [WARNING]: Skipped 2025-09-06 01:00:08.264769 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.264780 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-06 01:00:08.264791 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.264802 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-06 01:00:08.264813 | orchestrator | [WARNING]: Skipped 2025-09-06 01:00:08.264824 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.264834 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-06 01:00:08.264845 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.264856 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-06 01:00:08.264867 | orchestrator | [WARNING]: Skipped 2025-09-06 01:00:08.264878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.264888 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-06 01:00:08.264899 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.264910 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-06 01:00:08.264920 | orchestrator | [WARNING]: Skipped 2025-09-06 01:00:08.264931 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.264942 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-06 01:00:08.264953 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.264963 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-06 01:00:08.264974 | orchestrator | [WARNING]: Skipped 2025-09-06 01:00:08.264985 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.264999 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-06 01:00:08.265011 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.265021 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-06 01:00:08.265032 | orchestrator | [WARNING]: Skipped 2025-09-06 01:00:08.265043 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.265053 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-06 01:00:08.265064 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-06 01:00:08.265075 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-06 01:00:08.265086 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-06 01:00:08.265096 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 01:00:08.265107 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-06 01:00:08.265118 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-06 01:00:08.265128 | orchestrator | ok: [testbed-node-5 -> localhost] 
2025-09-06 01:00:08.265139 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-06 01:00:08.265150 | orchestrator | 2025-09-06 01:00:08.265160 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-06 01:00:08.265171 | orchestrator | Saturday 06 September 2025 00:57:44 +0000 (0:00:02.196) 0:00:49.855 **** 2025-09-06 01:00:08.265182 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-06 01:00:08.265193 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:08.265203 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-06 01:00:08.265265 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:08.265277 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-06 01:00:08.265288 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:08.265299 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-06 01:00:08.265317 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:08.265328 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-06 01:00:08.265339 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.265350 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-06 01:00:08.265361 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.265371 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-06 01:00:08.265383 | orchestrator | 2025-09-06 01:00:08.265394 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-06 01:00:08.265405 | orchestrator | Saturday 06 September 2025 00:58:00 +0000 (0:00:15.751) 0:01:05.606 **** 2025-09-06 01:00:08.265415 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-06 01:00:08.265426 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-06 01:00:08.265443 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:08.265455 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:08.265465 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-06 01:00:08.265476 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:08.265487 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-06 01:00:08.265498 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.265509 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-06 01:00:08.265520 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.265531 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-06 01:00:08.265541 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:08.265552 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-06 01:00:08.265563 | orchestrator | 2025-09-06 01:00:08.265574 | orchestrator | TASK [prometheus : Copying over 
prometheus alertmanager config file] *********** 2025-09-06 01:00:08.265585 | orchestrator | Saturday 06 September 2025 00:58:04 +0000 (0:00:03.603) 0:01:09.210 **** 2025-09-06 01:00:08.265596 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-06 01:00:08.265608 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-06 01:00:08.265619 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-06 01:00:08.265630 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-06 01:00:08.265641 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:08.265651 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:08.265660 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:08.265670 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-06 01:00:08.265680 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.265694 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-06 01:00:08.265704 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:08.265714 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-06 01:00:08.265723 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.265738 | orchestrator | 2025-09-06 01:00:08.265748 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-06 01:00:08.265758 | orchestrator | Saturday 06 September 2025 00:58:06 +0000 (0:00:02.703) 0:01:11.914 **** 2025-09-06 01:00:08.265767 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 01:00:08.265777 | orchestrator | 2025-09-06 01:00:08.265787 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-06 01:00:08.265796 | orchestrator | Saturday 06 September 2025 00:58:08 +0000 (0:00:01.269) 0:01:13.183 **** 2025-09-06 01:00:08.265806 | orchestrator | skipping: [testbed-manager] 2025-09-06 01:00:08.265816 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:08.265825 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:08.265835 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:08.265845 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.265854 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:08.265864 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.265874 | orchestrator | 2025-09-06 01:00:08.265883 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-06 01:00:08.265893 | orchestrator | Saturday 06 September 2025 00:58:09 +0000 (0:00:00.866) 0:01:14.050 **** 2025-09-06 01:00:08.265903 | orchestrator | skipping: [testbed-manager] 2025-09-06 01:00:08.265912 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.265922 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:08.265932 | orchestrator | changed: [testbed-node-0] 2025-09-06 
01:00:08.265941 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.265951 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:00:08.265960 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:00:08.265970 | orchestrator | 2025-09-06 01:00:08.265979 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-06 01:00:08.265989 | orchestrator | Saturday 06 September 2025 00:58:11 +0000 (0:00:02.652) 0:01:16.702 **** 2025-09-06 01:00:08.265999 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-06 01:00:08.266009 | orchestrator | skipping: [testbed-manager] 2025-09-06 01:00:08.266041 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-06 01:00:08.266052 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:08.266062 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-06 01:00:08.266071 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:08.266081 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-06 01:00:08.266090 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-06 01:00:08.266100 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:08.266110 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.266124 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-06 01:00:08.266135 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:08.266144 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-06 01:00:08.266154 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.266164 | orchestrator | 2025-09-06 01:00:08.266174 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-06 01:00:08.266184 | orchestrator | Saturday 06 September 2025 00:58:14 +0000 (0:00:03.183) 0:01:19.885 **** 2025-09-06 01:00:08.266194 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-06 01:00:08.266203 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-06 01:00:08.266227 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:08.266237 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:08.266253 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-06 01:00:08.266263 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-06 01:00:08.266272 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:08.266282 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.266292 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-06 01:00:08.266302 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.266311 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-06 01:00:08.266321 | orchestrator | skipping: [testbed-node-4] 
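The "Copying over ... config file" tasks in this play are template tasks: a Jinja2 template shipped with the role (for example `/ansible/roles/prometheus/templates/prometheus.yml.j2`) is rendered with the inventory's variables and written only on the hosts that actually run the corresponding service, which is why most nodes report "skipping" and only testbed-manager (or the mysqld/memcached exporter nodes) report "changed". Below is a minimal, self-contained sketch of that render step using the `jinja2` library; the template string and the variable names (`scrape_interval`, `node_exporter_targets`) are illustrative stand-ins, not the real kolla-ansible template or its variables.

```python
import jinja2

# Illustrative stand-in for a role template such as prometheus.yml.j2;
# the real template is shipped with kolla-ansible and is much larger.
TEMPLATE = """\
global:
  scrape_interval: {{ scrape_interval }}
scrape_configs:
  - job_name: node
    static_configs:
      - targets:
{% for host in node_exporter_targets %}
          - "{{ host }}:9100"
{% endfor %}
"""


def render_prometheus_config(variables: dict) -> str:
    """Render the config the way an Ansible 'template' task would:
    fail loudly on undefined variables, then the result is written
    out per host and a change is reported only if the content differs."""
    env = jinja2.Environment(undefined=jinja2.StrictUndefined, trim_blocks=True)
    return env.from_string(TEMPLATE).render(**variables)


if __name__ == "__main__":
    print(render_prometheus_config({
        "scrape_interval": "60s",
        "node_exporter_targets": ["192.168.16.10", "192.168.16.11", "192.168.16.12"],
    }))
```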
2025-09-06 01:00:08.266331 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-06 01:00:08.266340 | orchestrator | 2025-09-06 01:00:08.266350 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-06 01:00:08.266360 | orchestrator | Saturday 06 September 2025 00:58:18 +0000 (0:00:03.464) 0:01:23.349 **** 2025-09-06 01:00:08.266369 | orchestrator | [WARNING]: Skipped 2025-09-06 01:00:08.266379 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-06 01:00:08.266389 | orchestrator | due to this access issue: 2025-09-06 01:00:08.266403 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-06 01:00:08.266412 | orchestrator | not a directory 2025-09-06 01:00:08.266422 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-06 01:00:08.266432 | orchestrator | 2025-09-06 01:00:08.266442 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-06 01:00:08.266451 | orchestrator | Saturday 06 September 2025 00:58:20 +0000 (0:00:02.254) 0:01:25.604 **** 2025-09-06 01:00:08.266461 | orchestrator | skipping: [testbed-manager] 2025-09-06 01:00:08.266471 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:08.266480 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:08.266490 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:08.266499 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.266509 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:08.266518 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.266528 | orchestrator | 2025-09-06 01:00:08.266537 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-06 01:00:08.266547 | orchestrator | Saturday 06 September 2025 00:58:21 +0000 (0:00:00.835) 0:01:26.440 **** 2025-09-06 01:00:08.266557 | orchestrator | skipping: [testbed-manager] 2025-09-06 01:00:08.266566 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:08.266576 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:08.266586 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:08.266595 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:08.266605 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:08.266614 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:08.266624 | orchestrator | 2025-09-06 01:00:08.266633 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-06 01:00:08.266643 | orchestrator | Saturday 06 September 2025 00:58:22 +0000 (0:00:00.531) 0:01:26.971 **** 2025-09-06 01:00:08.266653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.266664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.266686 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-06 01:00:08.266697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.266708 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.266722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.266733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.266743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.266753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.266774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.266785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-06 01:00:08.266795 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.266805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.266819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.266829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.266839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.266855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.266871 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-06 01:00:08.266883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.266893 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.266904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.266914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.266924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.266939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.266954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.267053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-06 01:00:08.267075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.267085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-06 01:00:08.267099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-06 01:00:08.267109 | orchestrator | 2025-09-06 01:00:08.267119 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-06 01:00:08.267129 | orchestrator | Saturday 06 September 2025 00:58:27 +0000 (0:00:05.314) 0:01:32.286 **** 2025-09-06 01:00:08.267139 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-06 01:00:08.267149 | orchestrator | skipping: [testbed-manager] 2025-09-06 01:00:08.267158 | orchestrator | 2025-09-06 01:00:08.267168 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-06 01:00:08.267184 | orchestrator | Saturday 06 September 2025 00:58:29 +0000 (0:00:01.834) 0:01:34.121 **** 2025-09-06 01:00:08.267194 | orchestrator | 2025-09-06 01:00:08.267204 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-06 01:00:08.267257 | orchestrator | Saturday 06 September 2025 00:58:29 +0000 (0:00:00.066) 0:01:34.188 **** 2025-09-06 01:00:08.267268 | orchestrator | 2025-09-06 01:00:08.267278 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-06 01:00:08.267288 | orchestrator | Saturday 06 September 2025 00:58:29 +0000 (0:00:00.060) 0:01:34.248 **** 2025-09-06 01:00:08.267298 | orchestrator | 2025-09-06 01:00:08.267307 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-06 01:00:08.267317 | orchestrator | Saturday 06 September 2025 00:58:29 +0000 (0:00:00.060) 0:01:34.309 **** 2025-09-06 01:00:08.267327 | orchestrator | 2025-09-06 01:00:08.267336 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-06 01:00:08.267346 | orchestrator | Saturday 06 September 2025 
00:58:29 +0000 (0:00:00.157) 0:01:34.466 **** 2025-09-06 01:00:08.267356 | orchestrator | 2025-09-06 01:00:08.267366 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-06 01:00:08.267375 | orchestrator | Saturday 06 September 2025 00:58:29 +0000 (0:00:00.059) 0:01:34.525 **** 2025-09-06 01:00:08.267384 | orchestrator | 2025-09-06 01:00:08.267392 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-06 01:00:08.267400 | orchestrator | Saturday 06 September 2025 00:58:29 +0000 (0:00:00.057) 0:01:34.582 **** 2025-09-06 01:00:08.267408 | orchestrator | 2025-09-06 01:00:08.267415 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-06 01:00:08.267423 | orchestrator | Saturday 06 September 2025 00:58:29 +0000 (0:00:00.096) 0:01:34.679 **** 2025-09-06 01:00:08.267431 | orchestrator | changed: [testbed-manager] 2025-09-06 01:00:08.267439 | orchestrator | 2025-09-06 01:00:08.267447 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-06 01:00:08.267455 | orchestrator | Saturday 06 September 2025 00:58:47 +0000 (0:00:17.559) 0:01:52.239 **** 2025-09-06 01:00:08.267463 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:00:08.267471 | orchestrator | changed: [testbed-manager] 2025-09-06 01:00:08.267485 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:00:08.267494 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:00:08.267502 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:00:08.267510 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:00:08.267517 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:00:08.267525 | orchestrator | 2025-09-06 01:00:08.267533 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-06 01:00:08.267541 | orchestrator | Saturday 06 September 2025 00:59:00 +0000 (0:00:13.111) 0:02:05.351 **** 2025-09-06 01:00:08.267549 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:00:08.267557 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:00:08.267565 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:00:08.267573 | orchestrator | 2025-09-06 01:00:08.267580 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-06 01:00:08.267589 | orchestrator | Saturday 06 September 2025 00:59:09 +0000 (0:00:09.580) 0:02:14.931 **** 2025-09-06 01:00:08.267596 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:00:08.267604 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:00:08.267612 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:00:08.267620 | orchestrator | 2025-09-06 01:00:08.267628 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-06 01:00:08.267636 | orchestrator | Saturday 06 September 2025 00:59:20 +0000 (0:00:10.492) 0:02:25.424 **** 2025-09-06 01:00:08.267644 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:00:08.267651 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:00:08.267659 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:00:08.267667 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:00:08.267675 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:00:08.267688 | orchestrator | changed: [testbed-manager] 2025-09-06 01:00:08.267696 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:00:08.267704 | 
orchestrator | 2025-09-06 01:00:08.267711 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-06 01:00:08.267719 | orchestrator | Saturday 06 September 2025 00:59:33 +0000 (0:00:13.172) 0:02:38.596 **** 2025-09-06 01:00:08.267727 | orchestrator | changed: [testbed-manager] 2025-09-06 01:00:08.267735 | orchestrator | 2025-09-06 01:00:08.267743 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-06 01:00:08.267751 | orchestrator | Saturday 06 September 2025 00:59:42 +0000 (0:00:08.599) 0:02:47.195 **** 2025-09-06 01:00:08.267759 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:00:08.267767 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:00:08.267775 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:00:08.267783 | orchestrator | 2025-09-06 01:00:08.267790 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-06 01:00:08.267799 | orchestrator | Saturday 06 September 2025 00:59:47 +0000 (0:00:05.447) 0:02:52.643 **** 2025-09-06 01:00:08.267806 | orchestrator | changed: [testbed-manager] 2025-09-06 01:00:08.267814 | orchestrator | 2025-09-06 01:00:08.267826 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-06 01:00:08.267834 | orchestrator | Saturday 06 September 2025 00:59:57 +0000 (0:00:09.576) 0:03:02.219 **** 2025-09-06 01:00:08.267842 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:00:08.267850 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:00:08.267858 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:00:08.267866 | orchestrator | 2025-09-06 01:00:08.267874 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 01:00:08.267882 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-06 01:00:08.267890 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-06 01:00:08.267898 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-06 01:00:08.267907 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-06 01:00:08.267915 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-06 01:00:08.267923 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-06 01:00:08.267931 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-06 01:00:08.267939 | orchestrator | 2025-09-06 01:00:08.267946 | orchestrator | 2025-09-06 01:00:08.267954 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 01:00:08.267963 | orchestrator | Saturday 06 September 2025 01:00:07 +0000 (0:00:10.577) 0:03:12.797 **** 2025-09-06 01:00:08.267971 | orchestrator | =============================================================================== 2025-09-06 01:00:08.267979 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.34s 2025-09-06 01:00:08.267987 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.56s 2025-09-06 01:00:08.267995 | orchestrator | prometheus : 
Copying over prometheus config file ----------------------- 15.75s 2025-09-06 01:00:08.268002 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.17s 2025-09-06 01:00:08.268010 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.11s 2025-09-06 01:00:08.268022 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.58s 2025-09-06 01:00:08.268034 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.49s 2025-09-06 01:00:08.268042 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.58s 2025-09-06 01:00:08.268050 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.58s 2025-09-06 01:00:08.268058 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.60s 2025-09-06 01:00:08.268066 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.84s 2025-09-06 01:00:08.268074 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.45s 2025-09-06 01:00:08.268082 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.35s 2025-09-06 01:00:08.268089 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.32s 2025-09-06 01:00:08.268097 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.60s 2025-09-06 01:00:08.268105 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.47s 2025-09-06 01:00:08.268113 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.18s 2025-09-06 01:00:08.268121 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.15s 2025-09-06 01:00:08.268129 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.70s 2025-09-06 01:00:08.268137 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.65s 2025-09-06 01:00:08.268145 | orchestrator | 2025-09-06 01:00:08 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:11.305334 | orchestrator | 2025-09-06 01:00:11 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:11.306828 | orchestrator | 2025-09-06 01:00:11 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:11.308976 | orchestrator | 2025-09-06 01:00:11 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:11.311000 | orchestrator | 2025-09-06 01:00:11 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:11.311039 | orchestrator | 2025-09-06 01:00:11 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:14.353519 | orchestrator | 2025-09-06 01:00:14 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:14.353803 | orchestrator | 2025-09-06 01:00:14 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:14.354625 | orchestrator | 2025-09-06 01:00:14 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:14.355436 | orchestrator | 2025-09-06 01:00:14 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:14.355458 | orchestrator | 2025-09-06 01:00:14 | 
INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:17.395250 | orchestrator | 2025-09-06 01:00:17 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:17.396511 | orchestrator | 2025-09-06 01:00:17 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:17.398122 | orchestrator | 2025-09-06 01:00:17 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:17.399377 | orchestrator | 2025-09-06 01:00:17 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:17.399405 | orchestrator | 2025-09-06 01:00:17 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:20.452740 | orchestrator | 2025-09-06 01:00:20 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:20.453568 | orchestrator | 2025-09-06 01:00:20 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:20.456854 | orchestrator | 2025-09-06 01:00:20 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:20.458680 | orchestrator | 2025-09-06 01:00:20 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:20.458705 | orchestrator | 2025-09-06 01:00:20 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:23.502601 | orchestrator | 2025-09-06 01:00:23 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:23.504110 | orchestrator | 2025-09-06 01:00:23 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:23.506635 | orchestrator | 2025-09-06 01:00:23 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:23.509350 | orchestrator | 2025-09-06 01:00:23 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:23.509925 | orchestrator | 2025-09-06 01:00:23 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:26.551181 | orchestrator | 2025-09-06 01:00:26 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:26.551447 | orchestrator | 2025-09-06 01:00:26 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:26.552145 | orchestrator | 2025-09-06 01:00:26 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:26.553105 | orchestrator | 2025-09-06 01:00:26 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:26.553352 | orchestrator | 2025-09-06 01:00:26 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:29.599363 | orchestrator | 2025-09-06 01:00:29 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:29.602290 | orchestrator | 2025-09-06 01:00:29 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:29.603090 | orchestrator | 2025-09-06 01:00:29 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:29.604720 | orchestrator | 2025-09-06 01:00:29 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:29.604745 | orchestrator | 2025-09-06 01:00:29 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:32.641713 | orchestrator | 2025-09-06 01:00:32 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:32.646061 | orchestrator | 2025-09-06 01:00:32 | INFO  | Task 
986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:32.648333 | orchestrator | 2025-09-06 01:00:32 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:32.650325 | orchestrator | 2025-09-06 01:00:32 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:32.650349 | orchestrator | 2025-09-06 01:00:32 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:35.679573 | orchestrator | 2025-09-06 01:00:35 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:35.680255 | orchestrator | 2025-09-06 01:00:35 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:35.681119 | orchestrator | 2025-09-06 01:00:35 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:35.681872 | orchestrator | 2025-09-06 01:00:35 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:35.681902 | orchestrator | 2025-09-06 01:00:35 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:38.705016 | orchestrator | 2025-09-06 01:00:38 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:38.705237 | orchestrator | 2025-09-06 01:00:38 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:38.705948 | orchestrator | 2025-09-06 01:00:38 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:38.706481 | orchestrator | 2025-09-06 01:00:38 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:38.706573 | orchestrator | 2025-09-06 01:00:38 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:41.731544 | orchestrator | 2025-09-06 01:00:41 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:41.734147 | orchestrator | 2025-09-06 01:00:41 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:41.735500 | orchestrator | 2025-09-06 01:00:41 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:41.736999 | orchestrator | 2025-09-06 01:00:41 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:41.737605 | orchestrator | 2025-09-06 01:00:41 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:44.762635 | orchestrator | 2025-09-06 01:00:44 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:44.763440 | orchestrator | 2025-09-06 01:00:44 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:44.763865 | orchestrator | 2025-09-06 01:00:44 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:44.764613 | orchestrator | 2025-09-06 01:00:44 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:44.765263 | orchestrator | 2025-09-06 01:00:44 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:47.806727 | orchestrator | 2025-09-06 01:00:47 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:47.806802 | orchestrator | 2025-09-06 01:00:47 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:47.806815 | orchestrator | 2025-09-06 01:00:47 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:47.806827 | orchestrator | 2025-09-06 01:00:47 | INFO  | Task 
2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:47.806838 | orchestrator | 2025-09-06 01:00:47 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:50.816912 | orchestrator | 2025-09-06 01:00:50 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:50.817095 | orchestrator | 2025-09-06 01:00:50 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state STARTED 2025-09-06 01:00:50.818142 | orchestrator | 2025-09-06 01:00:50 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:50.819832 | orchestrator | 2025-09-06 01:00:50 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:50.819854 | orchestrator | 2025-09-06 01:00:50 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:53.850750 | orchestrator | 2025-09-06 01:00:53 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:53.851729 | orchestrator | 2025-09-06 01:00:53 | INFO  | Task 986c6f89-dbfb-45e4-aeb4-7375e4650192 is in state SUCCESS 2025-09-06 01:00:53.853040 | orchestrator | 2025-09-06 01:00:53.853069 | orchestrator | 2025-09-06 01:00:53.853080 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 01:00:53.853116 | orchestrator | 2025-09-06 01:00:53.853128 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 01:00:53.853139 | orchestrator | Saturday 06 September 2025 00:57:12 +0000 (0:00:00.262) 0:00:00.262 **** 2025-09-06 01:00:53.853151 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:00:53.853180 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:00:53.853191 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:00:53.853201 | orchestrator | ok: [testbed-node-3] 2025-09-06 01:00:53.853212 | orchestrator | ok: [testbed-node-4] 2025-09-06 01:00:53.853223 | orchestrator | ok: [testbed-node-5] 2025-09-06 01:00:53.853234 | orchestrator | 2025-09-06 01:00:53.853245 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 01:00:53.853268 | orchestrator | Saturday 06 September 2025 00:57:13 +0000 (0:00:00.618) 0:00:00.881 **** 2025-09-06 01:00:53.853279 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-06 01:00:53.853290 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-06 01:00:53.853301 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-06 01:00:53.853312 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-06 01:00:53.853414 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-06 01:00:53.853427 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-06 01:00:53.853438 | orchestrator | 2025-09-06 01:00:53.853449 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-06 01:00:53.853460 | orchestrator | 2025-09-06 01:00:53.853471 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-06 01:00:53.853482 | orchestrator | Saturday 06 September 2025 00:57:14 +0000 (0:00:00.588) 0:00:01.469 **** 2025-09-06 01:00:53.854234 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 01:00:53.854278 | orchestrator | 2025-09-06 01:00:53.854293 
| orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-06 01:00:53.854305 | orchestrator | Saturday 06 September 2025 00:57:15 +0000 (0:00:01.051) 0:00:02.521 **** 2025-09-06 01:00:53.854316 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-06 01:00:53.854327 | orchestrator | 2025-09-06 01:00:53.854338 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-06 01:00:53.854349 | orchestrator | Saturday 06 September 2025 00:57:17 +0000 (0:00:02.768) 0:00:05.290 **** 2025-09-06 01:00:53.854359 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-06 01:00:53.854371 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-06 01:00:53.854381 | orchestrator | 2025-09-06 01:00:53.854392 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-06 01:00:53.854403 | orchestrator | Saturday 06 September 2025 00:57:23 +0000 (0:00:05.971) 0:00:11.261 **** 2025-09-06 01:00:53.854414 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-06 01:00:53.854425 | orchestrator | 2025-09-06 01:00:53.854436 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-06 01:00:53.854447 | orchestrator | Saturday 06 September 2025 00:57:26 +0000 (0:00:02.914) 0:00:14.176 **** 2025-09-06 01:00:53.854458 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-06 01:00:53.854468 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-06 01:00:53.854479 | orchestrator | 2025-09-06 01:00:53.854490 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-06 01:00:53.854501 | orchestrator | Saturday 06 September 2025 00:57:30 +0000 (0:00:03.482) 0:00:17.658 **** 2025-09-06 01:00:53.854511 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-06 01:00:53.854522 | orchestrator | 2025-09-06 01:00:53.854558 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-06 01:00:53.854570 | orchestrator | Saturday 06 September 2025 00:57:34 +0000 (0:00:03.727) 0:00:21.385 **** 2025-09-06 01:00:53.854580 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-06 01:00:53.854591 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-06 01:00:53.854602 | orchestrator | 2025-09-06 01:00:53.854613 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-06 01:00:53.854623 | orchestrator | Saturday 06 September 2025 00:57:41 +0000 (0:00:07.430) 0:00:28.816 **** 2025-09-06 01:00:53.854637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.854850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.854871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.854884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.854906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.854918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.854971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.854990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.855002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.855013 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.855031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.855043 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.855054 | orchestrator | 2025-09-06 01:00:53.855110 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-06 01:00:53.855132 | orchestrator | Saturday 06 September 2025 00:57:44 +0000 (0:00:02.504) 0:00:31.321 **** 2025-09-06 01:00:53.855153 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:53.855237 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:53.855249 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:53.855260 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:53.855271 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:53.855281 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:53.855292 | orchestrator | 2025-09-06 01:00:53.855303 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-06 01:00:53.855313 | orchestrator | Saturday 06 September 2025 00:57:44 +0000 (0:00:00.793) 0:00:32.114 **** 2025-09-06 01:00:53.855324 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:53.855341 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:53.855352 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:53.855363 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 01:00:53.855373 | orchestrator | 2025-09-06 01:00:53.855384 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-06 01:00:53.855394 | orchestrator | Saturday 06 September 2025 00:57:46 +0000 (0:00:01.770) 0:00:33.885 **** 2025-09-06 01:00:53.855405 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-06 01:00:53.855416 | orchestrator | changed: 
[testbed-node-4] => (item=cinder-volume) 2025-09-06 01:00:53.855427 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-06 01:00:53.855437 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-06 01:00:53.855448 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-06 01:00:53.855458 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-06 01:00:53.855471 | orchestrator | 2025-09-06 01:00:53.855484 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-06 01:00:53.855496 | orchestrator | Saturday 06 September 2025 00:57:48 +0000 (0:00:01.853) 0:00:35.739 **** 2025-09-06 01:00:53.855519 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-06 01:00:53.855535 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-06 01:00:53.855549 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-06 01:00:53.855601 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-06 01:00:53.855621 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-06 01:00:53.855642 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-06 01:00:53.855657 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-06 01:00:53.855670 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-06 01:00:53.855718 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-06 01:00:53.855734 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-06 01:00:53.855755 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-06 01:00:53.855769 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-06 01:00:53.855783 | orchestrator | 2025-09-06 01:00:53.855796 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-06 01:00:53.855809 | orchestrator | Saturday 06 September 2025 00:57:51 +0000 (0:00:03.421) 0:00:39.160 **** 2025-09-06 01:00:53.855822 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-06 01:00:53.855834 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-06 01:00:53.855845 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-06 01:00:53.855855 | orchestrator | 2025-09-06 01:00:53.855866 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-06 01:00:53.855877 | orchestrator | Saturday 06 September 2025 00:57:53 +0000 (0:00:01.993) 0:00:41.154 **** 2025-09-06 01:00:53.855888 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-06 01:00:53.855898 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-06 01:00:53.855909 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-06 01:00:53.855920 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-06 01:00:53.855930 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-06 01:00:53.855970 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-06 01:00:53.855983 | orchestrator | 2025-09-06 01:00:53.855993 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-06 01:00:53.856004 | orchestrator | Saturday 06 September 2025 00:57:56 +0000 (0:00:02.678) 0:00:43.833 **** 2025-09-06 01:00:53.856015 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-06 01:00:53.856025 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-06 01:00:53.856036 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-06 01:00:53.856047 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-06 01:00:53.856064 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-06 01:00:53.856075 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-06 01:00:53.856085 | orchestrator | 2025-09-06 01:00:53.856100 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-06 01:00:53.856111 | orchestrator | Saturday 06 September 2025 00:57:57 +0000 (0:00:01.005) 0:00:44.839 **** 2025-09-06 01:00:53.856124 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:53.856144 | orchestrator | 2025-09-06 01:00:53.856183 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-06 01:00:53.856202 | orchestrator | Saturday 06 September 2025 00:57:57 +0000 (0:00:00.127) 0:00:44.966 **** 2025-09-06 01:00:53.856221 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:53.856242 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:53.856255 | orchestrator | 
skipping: [testbed-node-2] 2025-09-06 01:00:53.856266 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:53.856277 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:53.856287 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:53.856298 | orchestrator | 2025-09-06 01:00:53.856308 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-06 01:00:53.856319 | orchestrator | Saturday 06 September 2025 00:57:58 +0000 (0:00:01.024) 0:00:45.991 **** 2025-09-06 01:00:53.856330 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 01:00:53.856342 | orchestrator | 2025-09-06 01:00:53.856352 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-06 01:00:53.856363 | orchestrator | Saturday 06 September 2025 00:57:59 +0000 (0:00:01.037) 0:00:47.029 **** 2025-09-06 01:00:53.856374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.856386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.856398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.856465 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.856480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.856492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.856504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.856515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.856566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.856584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.856596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.856607 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.856618 | orchestrator | 2025-09-06 01:00:53.856629 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS 
certificate] *** 2025-09-06 01:00:53.856640 | orchestrator | Saturday 06 September 2025 00:58:03 +0000 (0:00:03.391) 0:00:50.420 **** 2025-09-06 01:00:53.856651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 01:00:53.856674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.856691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 01:00:53.856702 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:53.856714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.856725 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:53.856736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 01:00:53.856747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.856758 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:53.856769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.856796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.856812 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:53.856823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.856835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.856846 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:53.856857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.856874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.856885 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:53.856896 | orchestrator | 2025-09-06 01:00:53.856906 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-06 01:00:53.856917 | orchestrator | Saturday 06 September 2025 00:58:04 +0000 (0:00:01.501) 0:00:51.922 **** 2025-09-06 01:00:53.856938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 01:00:53.856951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.856963 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:53.856974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 01:00:53.856985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.857001 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:53.857013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.857032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.857043 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:53.857058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 01:00:53.857070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.857081 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:53.857092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.857109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.857120 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:53.857137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.857152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.857187 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:53.857198 | orchestrator | 2025-09-06 01:00:53.857213 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-06 01:00:53.857232 | orchestrator | Saturday 06 September 2025 00:58:07 +0000 (0:00:02.428) 0:00:54.350 **** 2025-09-06 01:00:53.857253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.857275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.857304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.857324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857457 | orchestrator | 2025-09-06 01:00:53.857468 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-06 01:00:53.857479 | orchestrator | Saturday 06 September 2025 00:58:10 +0000 (0:00:03.305) 0:00:57.655 **** 2025-09-06 01:00:53.857490 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-06 01:00:53.857501 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:53.857512 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-06 01:00:53.857523 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:53.857534 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-06 01:00:53.857545 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:53.857555 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-06 01:00:53.857566 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-06 01:00:53.857577 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-06 01:00:53.857588 | orchestrator | 2025-09-06 01:00:53.857598 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-06 01:00:53.857609 | orchestrator | Saturday 06 September 2025 00:58:12 +0000 (0:00:02.277) 0:00:59.933 **** 2025-09-06 01:00:53.857620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.857642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.857684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.857700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857728 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.857790 | orchestrator | 2025-09-06 01:00:53.857801 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-06 01:00:53.857812 | orchestrator | Saturday 06 September 2025 00:58:22 +0000 (0:00:09.722) 0:01:09.656 **** 2025-09-06 01:00:53.857828 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:53.857840 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:53.857851 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:53.857862 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:00:53.857872 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:00:53.857883 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:00:53.857894 | orchestrator | 2025-09-06 01:00:53.857904 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-06 01:00:53.857915 | orchestrator | Saturday 06 September 2025 00:58:25 +0000 (0:00:03.139) 0:01:12.795 **** 2025-09-06 01:00:53.857931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 01:00:53.857950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.857961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 01:00:53.857973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.857984 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:53.857995 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:53.858011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.858074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.858095 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:53.858107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-06 01:00:53.858118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.858129 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:53.858141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.858152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.858188 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:53.858227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.858253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-06 01:00:53.858273 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:53.858293 | orchestrator | 2025-09-06 01:00:53.858314 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-06 01:00:53.858333 | orchestrator | Saturday 06 September 2025 00:58:26 +0000 (0:00:01.144) 0:01:13.940 **** 2025-09-06 01:00:53.858344 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:53.858355 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:53.858365 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:53.858376 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:53.858387 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:53.858397 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:53.858408 | orchestrator | 2025-09-06 01:00:53.858419 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-06 01:00:53.858429 | orchestrator | Saturday 06 September 2025 00:58:27 +0000 (0:00:00.646) 0:01:14.586 **** 2025-09-06 01:00:53.858440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.858452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.858476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-06 01:00:53.858501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.858513 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.858524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.858536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.858563 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.858579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.858591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.858603 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.858614 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-06 01:00:53.858625 | orchestrator | 2025-09-06 01:00:53.858636 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-06 01:00:53.858647 | orchestrator | Saturday 06 September 2025 00:58:30 +0000 (0:00:02.847) 0:01:17.433 **** 2025-09-06 01:00:53.858658 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:53.858669 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:00:53.858679 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:00:53.858690 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:00:53.858700 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:00:53.858711 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:00:53.858721 | orchestrator | 2025-09-06 01:00:53.858738 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-06 01:00:53.858749 | orchestrator | Saturday 06 September 2025 00:58:30 +0000 (0:00:00.878) 0:01:18.313 **** 2025-09-06 01:00:53.858759 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:00:53.858770 | orchestrator | 2025-09-06 01:00:53.858781 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-06 01:00:53.858791 | orchestrator | Saturday 06 September 2025 00:58:33 +0000 (0:00:02.390) 
0:01:20.703 **** 2025-09-06 01:00:53.858802 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:00:53.858813 | orchestrator | 2025-09-06 01:00:53.858824 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-06 01:00:53.858834 | orchestrator | Saturday 06 September 2025 00:58:35 +0000 (0:00:02.159) 0:01:22.863 **** 2025-09-06 01:00:53.858845 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:00:53.858856 | orchestrator | 2025-09-06 01:00:53.858866 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-06 01:00:53.858878 | orchestrator | Saturday 06 September 2025 00:58:52 +0000 (0:00:17.355) 0:01:40.219 **** 2025-09-06 01:00:53.858888 | orchestrator | 2025-09-06 01:00:53.858904 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-06 01:00:53.858915 | orchestrator | Saturday 06 September 2025 00:58:52 +0000 (0:00:00.060) 0:01:40.279 **** 2025-09-06 01:00:53.858926 | orchestrator | 2025-09-06 01:00:53.858937 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-06 01:00:53.858948 | orchestrator | Saturday 06 September 2025 00:58:53 +0000 (0:00:00.062) 0:01:40.341 **** 2025-09-06 01:00:53.858958 | orchestrator | 2025-09-06 01:00:53.858969 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-06 01:00:53.858979 | orchestrator | Saturday 06 September 2025 00:58:53 +0000 (0:00:00.062) 0:01:40.403 **** 2025-09-06 01:00:53.858990 | orchestrator | 2025-09-06 01:00:53.859000 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-06 01:00:53.859015 | orchestrator | Saturday 06 September 2025 00:58:53 +0000 (0:00:00.060) 0:01:40.463 **** 2025-09-06 01:00:53.859026 | orchestrator | 2025-09-06 01:00:53.859037 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-06 01:00:53.859048 | orchestrator | Saturday 06 September 2025 00:58:53 +0000 (0:00:00.059) 0:01:40.523 **** 2025-09-06 01:00:53.859058 | orchestrator | 2025-09-06 01:00:53.859069 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-06 01:00:53.859080 | orchestrator | Saturday 06 September 2025 00:58:53 +0000 (0:00:00.062) 0:01:40.585 **** 2025-09-06 01:00:53.859090 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:00:53.859101 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:00:53.859112 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:00:53.859122 | orchestrator | 2025-09-06 01:00:53.859133 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-06 01:00:53.859144 | orchestrator | Saturday 06 September 2025 00:59:16 +0000 (0:00:23.219) 0:02:03.804 **** 2025-09-06 01:00:53.859154 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:00:53.859221 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:00:53.859232 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:00:53.859243 | orchestrator | 2025-09-06 01:00:53.859253 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-06 01:00:53.859264 | orchestrator | Saturday 06 September 2025 00:59:25 +0000 (0:00:09.438) 0:02:13.243 **** 2025-09-06 01:00:53.859278 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:00:53.859297 | orchestrator | 
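The item dictionaries echoed by the cinder tasks above all follow the same shape: a service key mapping to container name, inventory group, image, bind mounts and a healthcheck. A trimmed reconstruction of the cinder-volume entry, with values copied from the log and comments added as interpretation (the empty strings in the original dumps are optional mounts left disabled in this deployment):

# Reconstruction of one 'cinder-volume' item from the log above (illustrative only).
cinder_volume_service = {
    "container_name": "cinder_volume",
    "group": "cinder-volume",                  # inventory group the container is scheduled on
    "enabled": True,
    "image": "registry.osism.tech/kolla/cinder-volume:2024.2",
    "privileged": True,                        # needs host /dev plus LVM/iSCSI access
    "ipc_mode": "host",
    "volumes": [
        "/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "/dev/:/dev/",
        "/lib/modules:/lib/modules:ro",
        "/run:/run:shared",
        "cinder:/var/lib/cinder",              # named volume for cinder state
        "iscsi_info:/etc/iscsi",
        "kolla_logs:/var/log/kolla/",
        "/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        # healthcheck_port presumably checks that the cinder-volume process holds a
        # socket on port 5672 (RabbitMQ); the API containers use healthcheck_curl instead.
        "test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"],
        "timeout": "30",
    },
}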
changed: [testbed-node-4] 2025-09-06 01:00:53.859315 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:00:53.859333 | orchestrator | 2025-09-06 01:00:53.859345 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-06 01:00:53.859355 | orchestrator | Saturday 06 September 2025 01:00:37 +0000 (0:01:11.765) 0:03:25.008 **** 2025-09-06 01:00:53.859364 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:00:53.859380 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:00:53.859390 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:00:53.859399 | orchestrator | 2025-09-06 01:00:53.859409 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-06 01:00:53.859418 | orchestrator | Saturday 06 September 2025 01:00:48 +0000 (0:00:11.141) 0:03:36.150 **** 2025-09-06 01:00:53.859428 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:00:53.859437 | orchestrator | 2025-09-06 01:00:53.859447 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 01:00:53.859456 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-06 01:00:53.859466 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-06 01:00:53.859476 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-06 01:00:53.859485 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-06 01:00:53.859495 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-06 01:00:53.859504 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-06 01:00:53.859514 | orchestrator | 2025-09-06 01:00:53.859523 | orchestrator | 2025-09-06 01:00:53.859533 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 01:00:53.859542 | orchestrator | Saturday 06 September 2025 01:00:50 +0000 (0:00:01.646) 0:03:37.797 **** 2025-09-06 01:00:53.859552 | orchestrator | =============================================================================== 2025-09-06 01:00:53.859561 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 71.77s 2025-09-06 01:00:53.859571 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.22s 2025-09-06 01:00:53.859580 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.36s 2025-09-06 01:00:53.859589 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.14s 2025-09-06 01:00:53.859598 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.72s 2025-09-06 01:00:53.859608 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.44s 2025-09-06 01:00:53.859617 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.43s 2025-09-06 01:00:53.859626 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.97s 2025-09-06 01:00:53.859642 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.73s 2025-09-06 01:00:53.859652 | orchestrator | 
service-ks-register : cinder | Creating users --------------------------- 3.48s 2025-09-06 01:00:53.859661 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.42s 2025-09-06 01:00:53.859671 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.39s 2025-09-06 01:00:53.859680 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.31s 2025-09-06 01:00:53.859693 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.14s 2025-09-06 01:00:53.859702 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.91s 2025-09-06 01:00:53.859716 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.85s 2025-09-06 01:00:53.859726 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.77s 2025-09-06 01:00:53.859736 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.68s 2025-09-06 01:00:53.859751 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.50s 2025-09-06 01:00:53.859761 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.43s 2025-09-06 01:00:53.859770 | orchestrator | 2025-09-06 01:00:53 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:53.859780 | orchestrator | 2025-09-06 01:00:53 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:53.859790 | orchestrator | 2025-09-06 01:00:53 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:00:53.859799 | orchestrator | 2025-09-06 01:00:53 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:56.881128 | orchestrator | 2025-09-06 01:00:56 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:56.881470 | orchestrator | 2025-09-06 01:00:56 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:56.882091 | orchestrator | 2025-09-06 01:00:56 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:56.883391 | orchestrator | 2025-09-06 01:00:56 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:00:56.883415 | orchestrator | 2025-09-06 01:00:56 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:00:59.912354 | orchestrator | 2025-09-06 01:00:59 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:00:59.912868 | orchestrator | 2025-09-06 01:00:59 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:00:59.913326 | orchestrator | 2025-09-06 01:00:59 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:00:59.915860 | orchestrator | 2025-09-06 01:00:59 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:00:59.915882 | orchestrator | 2025-09-06 01:00:59 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:02.940473 | orchestrator | 2025-09-06 01:01:02 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:02.942830 | orchestrator | 2025-09-06 01:01:02 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:02.943466 | orchestrator | 2025-09-06 01:01:02 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state 
STARTED 2025-09-06 01:01:02.944112 | orchestrator | 2025-09-06 01:01:02 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:02.944250 | orchestrator | 2025-09-06 01:01:02 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:05.967213 | orchestrator | 2025-09-06 01:01:05 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:05.967269 | orchestrator | 2025-09-06 01:01:05 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:05.967618 | orchestrator | 2025-09-06 01:01:05 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:05.968205 | orchestrator | 2025-09-06 01:01:05 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:05.968278 | orchestrator | 2025-09-06 01:01:05 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:08.996251 | orchestrator | 2025-09-06 01:01:08 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:08.996330 | orchestrator | 2025-09-06 01:01:08 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:08.996344 | orchestrator | 2025-09-06 01:01:08 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:08.996379 | orchestrator | 2025-09-06 01:01:08 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:08.996391 | orchestrator | 2025-09-06 01:01:08 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:12.061279 | orchestrator | 2025-09-06 01:01:12 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:12.061746 | orchestrator | 2025-09-06 01:01:12 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:12.062400 | orchestrator | 2025-09-06 01:01:12 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:12.062812 | orchestrator | 2025-09-06 01:01:12 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:12.062836 | orchestrator | 2025-09-06 01:01:12 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:15.099878 | orchestrator | 2025-09-06 01:01:15 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:15.100065 | orchestrator | 2025-09-06 01:01:15 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:15.102740 | orchestrator | 2025-09-06 01:01:15 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:15.103250 | orchestrator | 2025-09-06 01:01:15 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:15.103275 | orchestrator | 2025-09-06 01:01:15 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:18.136843 | orchestrator | 2025-09-06 01:01:18 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:18.136922 | orchestrator | 2025-09-06 01:01:18 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:18.137784 | orchestrator | 2025-09-06 01:01:18 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:18.138504 | orchestrator | 2025-09-06 01:01:18 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:18.138526 | orchestrator | 2025-09-06 01:01:18 | INFO  | Wait 1 second(s) until the next check 2025-09-06 
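The healthcheck entries above use two helper commands from the kolla images: healthcheck_curl probes an HTTP endpoint (used for the API containers), while healthcheck_port appears to check that the named process has a socket on the given port (5672 being RabbitMQ). A rough Python stand-in for the curl-style probe, with the URL taken from the cinder-api definition on testbed-node-0; the real scripts ship inside the containers and are not reproduced here:

import sys
import urllib.request

def healthcheck_curl(url: str, timeout: float = 30.0) -> int:
    """Return 0 (healthy) if the endpoint answers, 1 otherwise - a rough stand-in only."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 0 if resp.status < 500 else 1
    except Exception:
        return 1

if __name__ == "__main__":
    # cinder-api on testbed-node-0, as listed in the healthcheck above
    sys.exit(healthcheck_curl("http://192.168.16.10:8776"))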
01:01:21.163808 | orchestrator | 2025-09-06 01:01:21 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:21.164062 | orchestrator | 2025-09-06 01:01:21 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:21.164605 | orchestrator | 2025-09-06 01:01:21 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:21.165250 | orchestrator | 2025-09-06 01:01:21 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:21.165271 | orchestrator | 2025-09-06 01:01:21 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:24.199065 | orchestrator | 2025-09-06 01:01:24 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:24.199177 | orchestrator | 2025-09-06 01:01:24 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:24.199192 | orchestrator | 2025-09-06 01:01:24 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:24.199204 | orchestrator | 2025-09-06 01:01:24 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:24.199215 | orchestrator | 2025-09-06 01:01:24 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:27.231941 | orchestrator | 2025-09-06 01:01:27 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:27.233852 | orchestrator | 2025-09-06 01:01:27 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:27.235546 | orchestrator | 2025-09-06 01:01:27 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:27.237089 | orchestrator | 2025-09-06 01:01:27 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:27.237391 | orchestrator | 2025-09-06 01:01:27 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:30.264568 | orchestrator | 2025-09-06 01:01:30 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:30.264838 | orchestrator | 2025-09-06 01:01:30 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:30.265798 | orchestrator | 2025-09-06 01:01:30 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:30.266750 | orchestrator | 2025-09-06 01:01:30 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:30.267360 | orchestrator | 2025-09-06 01:01:30 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:33.290519 | orchestrator | 2025-09-06 01:01:33 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:33.291014 | orchestrator | 2025-09-06 01:01:33 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:33.292428 | orchestrator | 2025-09-06 01:01:33 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:33.293220 | orchestrator | 2025-09-06 01:01:33 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:33.293326 | orchestrator | 2025-09-06 01:01:33 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:36.325639 | orchestrator | 2025-09-06 01:01:36 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:36.326264 | orchestrator | 2025-09-06 01:01:36 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 
01:01:36.326976 | orchestrator | 2025-09-06 01:01:36 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:36.328692 | orchestrator | 2025-09-06 01:01:36 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:36.328732 | orchestrator | 2025-09-06 01:01:36 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:39.357620 | orchestrator | 2025-09-06 01:01:39 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:39.359213 | orchestrator | 2025-09-06 01:01:39 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:39.359924 | orchestrator | 2025-09-06 01:01:39 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:39.361407 | orchestrator | 2025-09-06 01:01:39 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:39.361449 | orchestrator | 2025-09-06 01:01:39 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:42.384248 | orchestrator | 2025-09-06 01:01:42 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:42.384428 | orchestrator | 2025-09-06 01:01:42 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:42.385071 | orchestrator | 2025-09-06 01:01:42 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:42.385609 | orchestrator | 2025-09-06 01:01:42 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:42.385631 | orchestrator | 2025-09-06 01:01:42 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:45.411208 | orchestrator | 2025-09-06 01:01:45 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:45.411813 | orchestrator | 2025-09-06 01:01:45 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:45.412433 | orchestrator | 2025-09-06 01:01:45 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:45.418943 | orchestrator | 2025-09-06 01:01:45 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:45.418967 | orchestrator | 2025-09-06 01:01:45 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:48.452704 | orchestrator | 2025-09-06 01:01:48 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:48.452776 | orchestrator | 2025-09-06 01:01:48 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:48.452789 | orchestrator | 2025-09-06 01:01:48 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:48.452800 | orchestrator | 2025-09-06 01:01:48 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:48.452811 | orchestrator | 2025-09-06 01:01:48 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:51.502830 | orchestrator | 2025-09-06 01:01:51 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:51.503454 | orchestrator | 2025-09-06 01:01:51 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:51.503863 | orchestrator | 2025-09-06 01:01:51 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:51.504418 | orchestrator | 2025-09-06 01:01:51 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 
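The repeating "is in state STARTED ... Wait 1 second(s) until the next check" lines above and below come from the deployment driver polling its task IDs until each one leaves the STARTED state. A minimal sketch of that pattern; get_task_state is a hypothetical helper, and the real lookup lives in the OSISM tooling rather than in this job:

import time

def get_task_state(task_id: str) -> str:
    """Hypothetical helper; in the real tooling the state comes from the task backend."""
    raise NotImplementedError

def wait_for_tasks(task_ids, interval: int = 1) -> None:
    """Poll until no task is left in STARTED, mirroring the log output above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)   # finished tasks drop out of later rounds
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)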
01:01:51.504448 | orchestrator | 2025-09-06 01:01:51 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:54.527829 | orchestrator | 2025-09-06 01:01:54 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:54.527901 | orchestrator | 2025-09-06 01:01:54 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:54.528327 | orchestrator | 2025-09-06 01:01:54 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:54.528939 | orchestrator | 2025-09-06 01:01:54 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:54.528976 | orchestrator | 2025-09-06 01:01:54 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:01:57.554793 | orchestrator | 2025-09-06 01:01:57 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:01:57.554868 | orchestrator | 2025-09-06 01:01:57 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:01:57.555395 | orchestrator | 2025-09-06 01:01:57 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state STARTED 2025-09-06 01:01:57.556124 | orchestrator | 2025-09-06 01:01:57 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:01:57.556146 | orchestrator | 2025-09-06 01:01:57 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:00.583465 | orchestrator | 2025-09-06 01:02:00 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:00.583699 | orchestrator | 2025-09-06 01:02:00 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:00.587526 | orchestrator | 2025-09-06 01:02:00 | INFO  | Task 2b15fb50-50ec-43c9-b8c6-5822e1ca273c is in state SUCCESS 2025-09-06 01:02:00.588874 | orchestrator | 2025-09-06 01:02:00.588928 | orchestrator | 2025-09-06 01:02:00.589007 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 01:02:00.589020 | orchestrator | 2025-09-06 01:02:00.589032 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 01:02:00.589044 | orchestrator | Saturday 06 September 2025 01:00:11 +0000 (0:00:00.240) 0:00:00.240 **** 2025-09-06 01:02:00.589055 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:02:00.589067 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:02:00.589129 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:02:00.589141 | orchestrator | 2025-09-06 01:02:00.589531 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 01:02:00.589550 | orchestrator | Saturday 06 September 2025 01:00:12 +0000 (0:00:00.258) 0:00:00.498 **** 2025-09-06 01:02:00.589561 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-06 01:02:00.589573 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-06 01:02:00.589583 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-06 01:02:00.589594 | orchestrator | 2025-09-06 01:02:00.589606 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-06 01:02:00.589617 | orchestrator | 2025-09-06 01:02:00.589627 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-06 01:02:00.589639 | orchestrator | Saturday 06 September 2025 01:00:12 +0000 (0:00:00.348) 0:00:00.846 **** 2025-09-06 
01:02:00.589649 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:02:00.589661 | orchestrator | 2025-09-06 01:02:00.589672 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-06 01:02:00.589683 | orchestrator | Saturday 06 September 2025 01:00:12 +0000 (0:00:00.508) 0:00:01.354 **** 2025-09-06 01:02:00.589694 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-06 01:02:00.589705 | orchestrator | 2025-09-06 01:02:00.589716 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-06 01:02:00.589727 | orchestrator | Saturday 06 September 2025 01:00:16 +0000 (0:00:03.379) 0:00:04.733 **** 2025-09-06 01:02:00.589737 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-06 01:02:00.589749 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-06 01:02:00.589760 | orchestrator | 2025-09-06 01:02:00.589771 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-06 01:02:00.589781 | orchestrator | Saturday 06 September 2025 01:00:22 +0000 (0:00:06.660) 0:00:11.394 **** 2025-09-06 01:02:00.589792 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-06 01:02:00.589803 | orchestrator | 2025-09-06 01:02:00.589814 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-06 01:02:00.589825 | orchestrator | Saturday 06 September 2025 01:00:26 +0000 (0:00:03.315) 0:00:14.709 **** 2025-09-06 01:02:00.589836 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-06 01:02:00.589847 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-06 01:02:00.589857 | orchestrator | 2025-09-06 01:02:00.589868 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-06 01:02:00.589879 | orchestrator | Saturday 06 September 2025 01:00:29 +0000 (0:00:03.721) 0:00:18.431 **** 2025-09-06 01:02:00.589890 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-06 01:02:00.589900 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-06 01:02:00.589911 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-06 01:02:00.589922 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-06 01:02:00.589933 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-06 01:02:00.589944 | orchestrator | 2025-09-06 01:02:00.589955 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-06 01:02:00.589978 | orchestrator | Saturday 06 September 2025 01:00:45 +0000 (0:00:15.828) 0:00:34.260 **** 2025-09-06 01:02:00.589989 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-06 01:02:00.590000 | orchestrator | 2025-09-06 01:02:00.590011 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-06 01:02:00.590131 | orchestrator | Saturday 06 September 2025 01:00:50 +0000 (0:00:04.985) 0:00:39.246 **** 2025-09-06 01:02:00.590162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.590194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.590210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.590225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590331 | orchestrator | 2025-09-06 01:02:00.590345 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-06 01:02:00.590358 | orchestrator | Saturday 06 September 2025 01:00:53 +0000 (0:00:02.772) 0:00:42.019 **** 2025-09-06 01:02:00.590371 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-06 01:02:00.590385 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-06 01:02:00.590397 | 
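The service-ks-register tasks earlier in this play registered the barbican key-manager service, its internal and public endpoints, the service user, the "barbican -> service -> admin" grant and the barbican-specific roles in Keystone. Roughly equivalent OpenStack CLI calls are sketched below in Python; the region name and password are placeholders, not values from this run, and this is not how the kolla-ansible modules actually perform the calls:

```python
import subprocess

def openstack(*args: str) -> None:
    """Run one OpenStack CLI command; assumes admin credentials in the environment."""
    subprocess.run(["openstack", *args], check=True)

# Service and endpoints (URLs as printed by the endpoint-creation task above;
# "RegionOne" is a placeholder region name).
openstack("service", "create", "--name", "barbican",
          "--description", "Barbican Key Management Service", "key-manager")
openstack("endpoint", "create", "--region", "RegionOne", "key-manager",
          "internal", "https://api-int.testbed.osism.xyz:9311")
openstack("endpoint", "create", "--region", "RegionOne", "key-manager",
          "public", "https://api.testbed.osism.xyz:9311")

# Service project, service user and the "barbican -> service -> admin" grant.
openstack("project", "create", "--or-show", "service")
openstack("user", "create", "--project", "service",
          "--password", "REPLACE_ME", "barbican")  # placeholder password
openstack("role", "add", "--project", "service", "--user", "barbican", "admin")

# Barbican-specific roles from the "Creating roles" task.
for role in ("key-manager:service-admin", "creator", "observer", "audit"):
    openstack("role", "create", "--or-show", role)
```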
orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-06 01:02:00.590411 | orchestrator | 2025-09-06 01:02:00.590425 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-06 01:02:00.590437 | orchestrator | Saturday 06 September 2025 01:00:54 +0000 (0:00:01.231) 0:00:43.250 **** 2025-09-06 01:02:00.590451 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:02:00.590464 | orchestrator | 2025-09-06 01:02:00.590477 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-06 01:02:00.590496 | orchestrator | Saturday 06 September 2025 01:00:55 +0000 (0:00:00.329) 0:00:43.580 **** 2025-09-06 01:02:00.590507 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:02:00.590518 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:02:00.590529 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:02:00.590540 | orchestrator | 2025-09-06 01:02:00.590551 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-06 01:02:00.590562 | orchestrator | Saturday 06 September 2025 01:00:55 +0000 (0:00:00.473) 0:00:44.054 **** 2025-09-06 01:02:00.590573 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:02:00.590584 | orchestrator | 2025-09-06 01:02:00.590595 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-06 01:02:00.590606 | orchestrator | Saturday 06 September 2025 01:00:56 +0000 (0:00:00.802) 0:00:44.856 **** 2025-09-06 01:02:00.590621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.590641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.590653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.590665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.590752 | orchestrator | 2025-09-06 01:02:00.590763 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-06 01:02:00.590774 | orchestrator | Saturday 06 September 2025 01:01:00 +0000 (0:00:03.835) 0:00:48.691 **** 2025-09-06 01:02:00.590785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 01:02:00.590804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.590820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.590831 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:02:00.590849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 01:02:00.590861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.590873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.590890 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:02:00.590902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 01:02:00.590913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.590929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.590940 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:02:00.590951 | orchestrator | 2025-09-06 01:02:00.590962 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-06 01:02:00.590974 | orchestrator | Saturday 06 September 2025 01:01:01 +0000 (0:00:01.232) 0:00:49.924 **** 2025-09-06 01:02:00.590992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 01:02:00.591005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591023 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591035 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:02:00.591046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 01:02:00.591067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591111 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:02:00.591130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 01:02:00.591150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591174 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:02:00.591185 | orchestrator | 2025-09-06 01:02:00.591196 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-06 01:02:00.591207 | orchestrator | Saturday 06 September 2025 01:01:03 +0000 (0:00:01.622) 0:00:51.546 **** 2025-09-06 01:02:00.591223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.591241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.591253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.591272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591358 | orchestrator | 2025-09-06 01:02:00.591369 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-06 01:02:00.591380 | orchestrator | Saturday 06 September 2025 01:01:06 +0000 (0:00:03.357) 0:00:54.904 **** 2025-09-06 01:02:00.591391 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:02:00.591402 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:02:00.591413 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:02:00.591424 | orchestrator | 2025-09-06 01:02:00.591435 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-06 01:02:00.591446 | orchestrator | Saturday 06 September 2025 01:01:09 +0000 (0:00:02.756) 0:00:57.660 **** 2025-09-06 01:02:00.591457 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 01:02:00.591468 | orchestrator | 2025-09-06 01:02:00.591479 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-06 01:02:00.591490 | orchestrator | Saturday 06 September 2025 01:01:10 +0000 (0:00:00.928) 0:00:58.588 **** 2025-09-06 01:02:00.591502 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:02:00.591512 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:02:00.591524 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:02:00.591534 | orchestrator | 2025-09-06 01:02:00.591545 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-06 01:02:00.591557 | orchestrator | Saturday 06 September 2025 01:01:11 +0000 (0:00:00.930) 0:00:59.519 **** 2025-09-06 01:02:00.591568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.591585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.591603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.591622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591702 | orchestrator | 2025-09-06 01:02:00.591714 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-06 01:02:00.591725 | orchestrator | Saturday 06 September 2025 01:01:18 +0000 (0:00:07.822) 0:01:07.341 **** 2025-09-06 01:02:00.591743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 01:02:00.591756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591779 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:02:00.591791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 01:02:00.591807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591844 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:02:00.591855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-06 01:02:00.591867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:02:00.591890 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:02:00.591901 | orchestrator | 2025-09-06 01:02:00.591912 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-06 01:02:00.591923 | orchestrator | Saturday 06 September 2025 01:01:19 +0000 (0:00:00.910) 
0:01:08.252 **** 2025-09-06 01:02:00.591939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.591963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.591976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-06 01:02:00.591987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.591999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.592014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.592036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.592054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.592067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:02:00.592124 | orchestrator | 2025-09-06 01:02:00.592137 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-06 01:02:00.592148 | orchestrator | Saturday 06 September 2025 01:01:22 +0000 (0:00:02.959) 0:01:11.211 **** 
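Each container definition dumped above carries a Docker healthcheck: barbican-api is probed with healthcheck_curl against its bound API address, while the keystone-listener and worker are probed with healthcheck_port against port 5672. The sketch below approximates the curl-style probe with the Python standard library; the exit-code semantics are an assumption for illustration, not a copy of the helper script shipped in the kolla images:

```python
import sys
import urllib.error
import urllib.request

def healthcheck_curl(url: str, timeout: float = 30.0) -> int:
    """Return 0 (healthy) if the endpoint answers at all with a status below 500."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:  # 3xx/4xx responses still prove the API is up
        status = exc.code
    except OSError as exc:                  # connection refused, timeout, DNS failure
        print(f"healthcheck failed: {exc}", file=sys.stderr)
        return 1
    return 0 if status < 500 else 1

if __name__ == "__main__":
    # Usage: python3 healthcheck.py http://192.168.16.10:9311
    sys.exit(healthcheck_curl(sys.argv[1]))
```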
2025-09-06 01:02:00.592159 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:02:00.592170 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:02:00.592181 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:02:00.592192 | orchestrator | 2025-09-06 01:02:00.592203 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-06 01:02:00.592214 | orchestrator | Saturday 06 September 2025 01:01:23 +0000 (0:00:00.450) 0:01:11.661 **** 2025-09-06 01:02:00.592225 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:02:00.592236 | orchestrator | 2025-09-06 01:02:00.592247 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-06 01:02:00.592258 | orchestrator | Saturday 06 September 2025 01:01:25 +0000 (0:00:02.109) 0:01:13.771 **** 2025-09-06 01:02:00.592268 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:02:00.592278 | orchestrator | 2025-09-06 01:02:00.592287 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-06 01:02:00.592297 | orchestrator | Saturday 06 September 2025 01:01:27 +0000 (0:00:02.228) 0:01:16.000 **** 2025-09-06 01:02:00.592307 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:02:00.592317 | orchestrator | 2025-09-06 01:02:00.592326 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-06 01:02:00.592336 | orchestrator | Saturday 06 September 2025 01:01:38 +0000 (0:00:11.104) 0:01:27.105 **** 2025-09-06 01:02:00.592346 | orchestrator | 2025-09-06 01:02:00.592361 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-06 01:02:00.592371 | orchestrator | Saturday 06 September 2025 01:01:38 +0000 (0:00:00.062) 0:01:27.167 **** 2025-09-06 01:02:00.592381 | orchestrator | 2025-09-06 01:02:00.592391 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-06 01:02:00.592401 | orchestrator | Saturday 06 September 2025 01:01:38 +0000 (0:00:00.053) 0:01:27.221 **** 2025-09-06 01:02:00.592410 | orchestrator | 2025-09-06 01:02:00.592420 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-06 01:02:00.592430 | orchestrator | Saturday 06 September 2025 01:01:38 +0000 (0:00:00.049) 0:01:27.270 **** 2025-09-06 01:02:00.592440 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:02:00.592449 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:02:00.592459 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:02:00.592469 | orchestrator | 2025-09-06 01:02:00.592479 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-06 01:02:00.592489 | orchestrator | Saturday 06 September 2025 01:01:46 +0000 (0:00:07.541) 0:01:34.812 **** 2025-09-06 01:02:00.592498 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:02:00.592508 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:02:00.592518 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:02:00.592528 | orchestrator | 2025-09-06 01:02:00.592538 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-06 01:02:00.592548 | orchestrator | Saturday 06 September 2025 01:01:54 +0000 (0:00:07.861) 0:01:42.674 **** 2025-09-06 01:02:00.592557 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:02:00.592567 | orchestrator | changed: 
[testbed-node-1] 2025-09-06 01:02:00.592581 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:02:00.592591 | orchestrator | 2025-09-06 01:02:00.592601 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 01:02:00.592611 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-06 01:02:00.592622 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 01:02:00.592632 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 01:02:00.592641 | orchestrator | 2025-09-06 01:02:00.592651 | orchestrator | 2025-09-06 01:02:00.592661 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 01:02:00.592671 | orchestrator | Saturday 06 September 2025 01:01:59 +0000 (0:00:05.404) 0:01:48.078 **** 2025-09-06 01:02:00.592681 | orchestrator | =============================================================================== 2025-09-06 01:02:00.592691 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.83s 2025-09-06 01:02:00.592706 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.10s 2025-09-06 01:02:00.592716 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.86s 2025-09-06 01:02:00.592726 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 7.82s 2025-09-06 01:02:00.592735 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.54s 2025-09-06 01:02:00.592745 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.66s 2025-09-06 01:02:00.592755 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.40s 2025-09-06 01:02:00.592764 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.99s 2025-09-06 01:02:00.592774 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.84s 2025-09-06 01:02:00.592783 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.72s 2025-09-06 01:02:00.592793 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.38s 2025-09-06 01:02:00.592809 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.36s 2025-09-06 01:02:00.592819 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.32s 2025-09-06 01:02:00.592829 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.96s 2025-09-06 01:02:00.592838 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.77s 2025-09-06 01:02:00.592848 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.76s 2025-09-06 01:02:00.592858 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.23s 2025-09-06 01:02:00.592867 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.11s 2025-09-06 01:02:00.592877 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.62s 2025-09-06 01:02:00.592887 | orchestrator | service-cert-copy : barbican | Copying 
over backend internal TLS certificate --- 1.23s 2025-09-06 01:02:00.592896 | orchestrator | 2025-09-06 01:02:00 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:00.592906 | orchestrator | 2025-09-06 01:02:00 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:03.609697 | orchestrator | 2025-09-06 01:02:03 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:03.612001 | orchestrator | 2025-09-06 01:02:03 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:03.613598 | orchestrator | 2025-09-06 01:02:03 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:03.614616 | orchestrator | 2025-09-06 01:02:03 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:03.615223 | orchestrator | 2025-09-06 01:02:03 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:06.649837 | orchestrator | 2025-09-06 01:02:06 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:06.651558 | orchestrator | 2025-09-06 01:02:06 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:06.652378 | orchestrator | 2025-09-06 01:02:06 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:06.654150 | orchestrator | 2025-09-06 01:02:06 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:06.654174 | orchestrator | 2025-09-06 01:02:06 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:09.697308 | orchestrator | 2025-09-06 01:02:09 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:09.697678 | orchestrator | 2025-09-06 01:02:09 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:09.698338 | orchestrator | 2025-09-06 01:02:09 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:09.698968 | orchestrator | 2025-09-06 01:02:09 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:09.699138 | orchestrator | 2025-09-06 01:02:09 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:12.725158 | orchestrator | 2025-09-06 01:02:12 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:12.726163 | orchestrator | 2025-09-06 01:02:12 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:12.726955 | orchestrator | 2025-09-06 01:02:12 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:12.727931 | orchestrator | 2025-09-06 01:02:12 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:12.727953 | orchestrator | 2025-09-06 01:02:12 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:15.760728 | orchestrator | 2025-09-06 01:02:15 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:15.762199 | orchestrator | 2025-09-06 01:02:15 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:15.764164 | orchestrator | 2025-09-06 01:02:15 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:15.766659 | orchestrator | 2025-09-06 01:02:15 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:15.767082 | orchestrator | 2025-09-06 01:02:15 | INFO  | Wait 1 second(s) 
until the next check 2025-09-06 01:02:18.800384 | orchestrator | 2025-09-06 01:02:18 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:18.800458 | orchestrator | 2025-09-06 01:02:18 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:18.802403 | orchestrator | 2025-09-06 01:02:18 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:18.806329 | orchestrator | 2025-09-06 01:02:18 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:18.806360 | orchestrator | 2025-09-06 01:02:18 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:21.837241 | orchestrator | 2025-09-06 01:02:21 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:21.840655 | orchestrator | 2025-09-06 01:02:21 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:21.843518 | orchestrator | 2025-09-06 01:02:21 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:21.845773 | orchestrator | 2025-09-06 01:02:21 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:21.846257 | orchestrator | 2025-09-06 01:02:21 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:24.901267 | orchestrator | 2025-09-06 01:02:24 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:24.904020 | orchestrator | 2025-09-06 01:02:24 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:24.906583 | orchestrator | 2025-09-06 01:02:24 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:24.908488 | orchestrator | 2025-09-06 01:02:24 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:24.908725 | orchestrator | 2025-09-06 01:02:24 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:27.955517 | orchestrator | 2025-09-06 01:02:27 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:27.956624 | orchestrator | 2025-09-06 01:02:27 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:27.958765 | orchestrator | 2025-09-06 01:02:27 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:27.959845 | orchestrator | 2025-09-06 01:02:27 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:27.959868 | orchestrator | 2025-09-06 01:02:27 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:30.994559 | orchestrator | 2025-09-06 01:02:30 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:30.996167 | orchestrator | 2025-09-06 01:02:30 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:30.998800 | orchestrator | 2025-09-06 01:02:31 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:31.000366 | orchestrator | 2025-09-06 01:02:31 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:31.000391 | orchestrator | 2025-09-06 01:02:31 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:34.050616 | orchestrator | 2025-09-06 01:02:34 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:34.050954 | orchestrator | 2025-09-06 01:02:34 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 
is in state STARTED 2025-09-06 01:02:34.051971 | orchestrator | 2025-09-06 01:02:34 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:34.053144 | orchestrator | 2025-09-06 01:02:34 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:34.053177 | orchestrator | 2025-09-06 01:02:34 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:37.086864 | orchestrator | 2025-09-06 01:02:37 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:37.089638 | orchestrator | 2025-09-06 01:02:37 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:37.092813 | orchestrator | 2025-09-06 01:02:37 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:37.094199 | orchestrator | 2025-09-06 01:02:37 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:37.094459 | orchestrator | 2025-09-06 01:02:37 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:40.147413 | orchestrator | 2025-09-06 01:02:40 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:40.156365 | orchestrator | 2025-09-06 01:02:40 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:40.161097 | orchestrator | 2025-09-06 01:02:40 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:40.164357 | orchestrator | 2025-09-06 01:02:40 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:40.164383 | orchestrator | 2025-09-06 01:02:40 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:43.197284 | orchestrator | 2025-09-06 01:02:43 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:43.197466 | orchestrator | 2025-09-06 01:02:43 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:43.198368 | orchestrator | 2025-09-06 01:02:43 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:43.199948 | orchestrator | 2025-09-06 01:02:43 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:43.199975 | orchestrator | 2025-09-06 01:02:43 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:46.241946 | orchestrator | 2025-09-06 01:02:46 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:46.241996 | orchestrator | 2025-09-06 01:02:46 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:46.243112 | orchestrator | 2025-09-06 01:02:46 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:46.243331 | orchestrator | 2025-09-06 01:02:46 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:46.243572 | orchestrator | 2025-09-06 01:02:46 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:49.277375 | orchestrator | 2025-09-06 01:02:49 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:49.278724 | orchestrator | 2025-09-06 01:02:49 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:49.279797 | orchestrator | 2025-09-06 01:02:49 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:49.281355 | orchestrator | 2025-09-06 01:02:49 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is 
in state STARTED 2025-09-06 01:02:49.281592 | orchestrator | 2025-09-06 01:02:49 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:52.303976 | orchestrator | 2025-09-06 01:02:52 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:52.304471 | orchestrator | 2025-09-06 01:02:52 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:52.305083 | orchestrator | 2025-09-06 01:02:52 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:52.305809 | orchestrator | 2025-09-06 01:02:52 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:52.305836 | orchestrator | 2025-09-06 01:02:52 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:55.326878 | orchestrator | 2025-09-06 01:02:55 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:55.327295 | orchestrator | 2025-09-06 01:02:55 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:55.328048 | orchestrator | 2025-09-06 01:02:55 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:55.329599 | orchestrator | 2025-09-06 01:02:55 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:55.329630 | orchestrator | 2025-09-06 01:02:55 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:02:58.352626 | orchestrator | 2025-09-06 01:02:58 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:02:58.354629 | orchestrator | 2025-09-06 01:02:58 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:02:58.355607 | orchestrator | 2025-09-06 01:02:58 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:02:58.356520 | orchestrator | 2025-09-06 01:02:58 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:02:58.356615 | orchestrator | 2025-09-06 01:02:58 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:01.387766 | orchestrator | 2025-09-06 01:03:01 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:01.388249 | orchestrator | 2025-09-06 01:03:01 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:01.388885 | orchestrator | 2025-09-06 01:03:01 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:01.389712 | orchestrator | 2025-09-06 01:03:01 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:01.389734 | orchestrator | 2025-09-06 01:03:01 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:04.421371 | orchestrator | 2025-09-06 01:03:04 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:04.421459 | orchestrator | 2025-09-06 01:03:04 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:04.421934 | orchestrator | 2025-09-06 01:03:04 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:04.422590 | orchestrator | 2025-09-06 01:03:04 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:04.423347 | orchestrator | 2025-09-06 01:03:04 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:07.455848 | orchestrator | 2025-09-06 01:03:07 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 
2025-09-06 01:03:07.456543 | orchestrator | 2025-09-06 01:03:07 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:07.457790 | orchestrator | 2025-09-06 01:03:07 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:07.459189 | orchestrator | 2025-09-06 01:03:07 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:07.459219 | orchestrator | 2025-09-06 01:03:07 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:10.493796 | orchestrator | 2025-09-06 01:03:10 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:10.494375 | orchestrator | 2025-09-06 01:03:10 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:10.495296 | orchestrator | 2025-09-06 01:03:10 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:10.496389 | orchestrator | 2025-09-06 01:03:10 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:10.496411 | orchestrator | 2025-09-06 01:03:10 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:13.542785 | orchestrator | 2025-09-06 01:03:13 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:13.543783 | orchestrator | 2025-09-06 01:03:13 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:13.545328 | orchestrator | 2025-09-06 01:03:13 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:13.546660 | orchestrator | 2025-09-06 01:03:13 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:13.546702 | orchestrator | 2025-09-06 01:03:13 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:16.582881 | orchestrator | 2025-09-06 01:03:16 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:16.584558 | orchestrator | 2025-09-06 01:03:16 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:16.587213 | orchestrator | 2025-09-06 01:03:16 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:16.589187 | orchestrator | 2025-09-06 01:03:16 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:16.589217 | orchestrator | 2025-09-06 01:03:16 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:19.628545 | orchestrator | 2025-09-06 01:03:19 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:19.628649 | orchestrator | 2025-09-06 01:03:19 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:19.629761 | orchestrator | 2025-09-06 01:03:19 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:19.631357 | orchestrator | 2025-09-06 01:03:19 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:19.632049 | orchestrator | 2025-09-06 01:03:19 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:22.656098 | orchestrator | 2025-09-06 01:03:22 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:22.656545 | orchestrator | 2025-09-06 01:03:22 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:22.657120 | orchestrator | 2025-09-06 01:03:22 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 
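Editor's note: the repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines are the orchestrator polling the queued deployment tasks until each reaches a terminal state. A minimal Python sketch of such a wait loop follows; the get_task_state() helper and the set of terminal states are assumptions for illustration only, not the actual OSISM client API.

# Hedged sketch of a poll-until-done loop matching the log pattern above.
import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)

TERMINAL_STATES = {"SUCCESS", "FAILURE"}   # assumed terminal states

def get_task_state(task_id: str) -> str:
    """Placeholder: a real implementation would query the task backend."""
    raise NotImplementedError

def wait_for_tasks(task_ids, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):    # iterate over a copy so we can discard
            state = get_task_state(task_id)
            logging.info("Task %s is in state %s", task_id, state)
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            logging.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)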
2025-09-06 01:03:22.658801 | orchestrator | 2025-09-06 01:03:22 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:22.658832 | orchestrator | 2025-09-06 01:03:22 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:25.686788 | orchestrator | 2025-09-06 01:03:25 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:25.687321 | orchestrator | 2025-09-06 01:03:25 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:25.687823 | orchestrator | 2025-09-06 01:03:25 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:25.688930 | orchestrator | 2025-09-06 01:03:25 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:25.688939 | orchestrator | 2025-09-06 01:03:25 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:28.728298 | orchestrator | 2025-09-06 01:03:28 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:28.729855 | orchestrator | 2025-09-06 01:03:28 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:28.732321 | orchestrator | 2025-09-06 01:03:28 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:28.734178 | orchestrator | 2025-09-06 01:03:28 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:28.734214 | orchestrator | 2025-09-06 01:03:28 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:31.779376 | orchestrator | 2025-09-06 01:03:31 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:31.780475 | orchestrator | 2025-09-06 01:03:31 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:31.782189 | orchestrator | 2025-09-06 01:03:31 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:31.783428 | orchestrator | 2025-09-06 01:03:31 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:31.783529 | orchestrator | 2025-09-06 01:03:31 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:34.819012 | orchestrator | 2025-09-06 01:03:34 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:34.820459 | orchestrator | 2025-09-06 01:03:34 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:34.822704 | orchestrator | 2025-09-06 01:03:34 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:34.823699 | orchestrator | 2025-09-06 01:03:34 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:34.824412 | orchestrator | 2025-09-06 01:03:34 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:37.869653 | orchestrator | 2025-09-06 01:03:37 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:37.871204 | orchestrator | 2025-09-06 01:03:37 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:37.872794 | orchestrator | 2025-09-06 01:03:37 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:37.874529 | orchestrator | 2025-09-06 01:03:37 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:37.874628 | orchestrator | 2025-09-06 01:03:37 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:40.913861 
| orchestrator | 2025-09-06 01:03:40 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:40.915524 | orchestrator | 2025-09-06 01:03:40 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:40.917119 | orchestrator | 2025-09-06 01:03:40 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:40.918726 | orchestrator | 2025-09-06 01:03:40 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:40.918893 | orchestrator | 2025-09-06 01:03:40 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:43.971824 | orchestrator | 2025-09-06 01:03:43 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:43.973815 | orchestrator | 2025-09-06 01:03:43 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:43.975629 | orchestrator | 2025-09-06 01:03:43 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state STARTED 2025-09-06 01:03:43.977770 | orchestrator | 2025-09-06 01:03:43 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state STARTED 2025-09-06 01:03:43.978176 | orchestrator | 2025-09-06 01:03:43 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:47.019590 | orchestrator | 2025-09-06 01:03:47 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:47.022360 | orchestrator | 2025-09-06 01:03:47 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:03:47.024092 | orchestrator | 2025-09-06 01:03:47 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:47.027818 | orchestrator | 2025-09-06 01:03:47 | INFO  | Task 65a07530-cba1-41b6-a319-66083b21db1f is in state SUCCESS 2025-09-06 01:03:47.030426 | orchestrator | 2025-09-06 01:03:47.030464 | orchestrator | 2025-09-06 01:03:47.030477 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 01:03:47.030490 | orchestrator | 2025-09-06 01:03:47.030501 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 01:03:47.030513 | orchestrator | Saturday 06 September 2025 00:59:46 +0000 (0:00:00.232) 0:00:00.232 **** 2025-09-06 01:03:47.030525 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:03:47.030537 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:03:47.030547 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:03:47.030558 | orchestrator | ok: [testbed-node-3] 2025-09-06 01:03:47.030569 | orchestrator | ok: [testbed-node-4] 2025-09-06 01:03:47.030580 | orchestrator | ok: [testbed-node-5] 2025-09-06 01:03:47.030591 | orchestrator | 2025-09-06 01:03:47.030602 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 01:03:47.030613 | orchestrator | Saturday 06 September 2025 00:59:46 +0000 (0:00:00.591) 0:00:00.823 **** 2025-09-06 01:03:47.030624 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-06 01:03:47.030635 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-06 01:03:47.030646 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-06 01:03:47.030657 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-06 01:03:47.030667 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-06 01:03:47.030678 | orchestrator | ok: [testbed-node-5] => 
(item=enable_neutron_True) 2025-09-06 01:03:47.030689 | orchestrator | 2025-09-06 01:03:47.030700 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-06 01:03:47.030710 | orchestrator | 2025-09-06 01:03:47.030721 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-06 01:03:47.030732 | orchestrator | Saturday 06 September 2025 00:59:47 +0000 (0:00:00.535) 0:00:01.359 **** 2025-09-06 01:03:47.030745 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 01:03:47.030757 | orchestrator | 2025-09-06 01:03:47.030768 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-06 01:03:47.030807 | orchestrator | Saturday 06 September 2025 00:59:48 +0000 (0:00:01.032) 0:00:02.391 **** 2025-09-06 01:03:47.030819 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:03:47.030830 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:03:47.030840 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:03:47.030851 | orchestrator | ok: [testbed-node-3] 2025-09-06 01:03:47.030862 | orchestrator | ok: [testbed-node-4] 2025-09-06 01:03:47.030873 | orchestrator | ok: [testbed-node-5] 2025-09-06 01:03:47.030883 | orchestrator | 2025-09-06 01:03:47.030894 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-06 01:03:47.030953 | orchestrator | Saturday 06 September 2025 00:59:49 +0000 (0:00:01.136) 0:00:03.528 **** 2025-09-06 01:03:47.030967 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:03:47.030978 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:03:47.030989 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:03:47.031000 | orchestrator | ok: [testbed-node-3] 2025-09-06 01:03:47.031010 | orchestrator | ok: [testbed-node-4] 2025-09-06 01:03:47.031021 | orchestrator | ok: [testbed-node-5] 2025-09-06 01:03:47.031035 | orchestrator | 2025-09-06 01:03:47.031049 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-06 01:03:47.031061 | orchestrator | Saturday 06 September 2025 00:59:50 +0000 (0:00:00.999) 0:00:04.527 **** 2025-09-06 01:03:47.031074 | orchestrator | ok: [testbed-node-0] => { 2025-09-06 01:03:47.031088 | orchestrator |  "changed": false, 2025-09-06 01:03:47.031101 | orchestrator |  "msg": "All assertions passed" 2025-09-06 01:03:47.031114 | orchestrator | } 2025-09-06 01:03:47.031127 | orchestrator | ok: [testbed-node-1] => { 2025-09-06 01:03:47.031140 | orchestrator |  "changed": false, 2025-09-06 01:03:47.031152 | orchestrator |  "msg": "All assertions passed" 2025-09-06 01:03:47.031164 | orchestrator | } 2025-09-06 01:03:47.031177 | orchestrator | ok: [testbed-node-2] => { 2025-09-06 01:03:47.031190 | orchestrator |  "changed": false, 2025-09-06 01:03:47.031204 | orchestrator |  "msg": "All assertions passed" 2025-09-06 01:03:47.031216 | orchestrator | } 2025-09-06 01:03:47.031229 | orchestrator | ok: [testbed-node-3] => { 2025-09-06 01:03:47.031241 | orchestrator |  "changed": false, 2025-09-06 01:03:47.031296 | orchestrator |  "msg": "All assertions passed" 2025-09-06 01:03:47.031310 | orchestrator | } 2025-09-06 01:03:47.031322 | orchestrator | ok: [testbed-node-4] => { 2025-09-06 01:03:47.031335 | orchestrator |  "changed": false, 2025-09-06 01:03:47.031347 | orchestrator |  "msg": "All assertions passed" 2025-09-06 
01:03:47.031361 | orchestrator | } 2025-09-06 01:03:47.031374 | orchestrator | ok: [testbed-node-5] => { 2025-09-06 01:03:47.031385 | orchestrator |  "changed": false, 2025-09-06 01:03:47.031396 | orchestrator |  "msg": "All assertions passed" 2025-09-06 01:03:47.031407 | orchestrator | } 2025-09-06 01:03:47.031418 | orchestrator | 2025-09-06 01:03:47.031429 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-06 01:03:47.031440 | orchestrator | Saturday 06 September 2025 00:59:51 +0000 (0:00:00.749) 0:00:05.276 **** 2025-09-06 01:03:47.031451 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.031461 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.031472 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.031483 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.031494 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.031504 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.031515 | orchestrator | 2025-09-06 01:03:47.031526 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-06 01:03:47.031537 | orchestrator | Saturday 06 September 2025 00:59:51 +0000 (0:00:00.609) 0:00:05.885 **** 2025-09-06 01:03:47.031548 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-06 01:03:47.031559 | orchestrator | 2025-09-06 01:03:47.031570 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-06 01:03:47.031581 | orchestrator | Saturday 06 September 2025 00:59:55 +0000 (0:00:03.514) 0:00:09.400 **** 2025-09-06 01:03:47.031601 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-06 01:03:47.031748 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-06 01:03:47.031763 | orchestrator | 2025-09-06 01:03:47.031787 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-06 01:03:47.031799 | orchestrator | Saturday 06 September 2025 01:00:01 +0000 (0:00:06.492) 0:00:15.892 **** 2025-09-06 01:03:47.031810 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-06 01:03:47.031821 | orchestrator | 2025-09-06 01:03:47.031832 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-06 01:03:47.031843 | orchestrator | Saturday 06 September 2025 01:00:04 +0000 (0:00:03.231) 0:00:19.124 **** 2025-09-06 01:03:47.031853 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-06 01:03:47.031864 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-06 01:03:47.031875 | orchestrator | 2025-09-06 01:03:47.031886 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-06 01:03:47.031896 | orchestrator | Saturday 06 September 2025 01:00:08 +0000 (0:00:03.750) 0:00:22.875 **** 2025-09-06 01:03:47.031907 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-06 01:03:47.031939 | orchestrator | 2025-09-06 01:03:47.031951 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-06 01:03:47.031961 | orchestrator | Saturday 06 September 2025 01:00:12 +0000 (0:00:03.379) 0:00:26.254 **** 2025-09-06 01:03:47.031972 | orchestrator | changed: [testbed-node-0] => (item=neutron -> 
service -> admin) 2025-09-06 01:03:47.031983 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-06 01:03:47.031994 | orchestrator | 2025-09-06 01:03:47.032004 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-06 01:03:47.032015 | orchestrator | Saturday 06 September 2025 01:00:20 +0000 (0:00:08.275) 0:00:34.529 **** 2025-09-06 01:03:47.032026 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.032037 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.032047 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.032058 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.032069 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.032079 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.032090 | orchestrator | 2025-09-06 01:03:47.032101 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-06 01:03:47.032112 | orchestrator | Saturday 06 September 2025 01:00:21 +0000 (0:00:00.770) 0:00:35.300 **** 2025-09-06 01:03:47.032122 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.032133 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.032144 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.032155 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.032165 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.032176 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.032187 | orchestrator | 2025-09-06 01:03:47.032197 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-06 01:03:47.032215 | orchestrator | Saturday 06 September 2025 01:00:23 +0000 (0:00:02.030) 0:00:37.330 **** 2025-09-06 01:03:47.032227 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:03:47.032237 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:03:47.032248 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:03:47.032259 | orchestrator | ok: [testbed-node-3] 2025-09-06 01:03:47.032303 | orchestrator | ok: [testbed-node-4] 2025-09-06 01:03:47.032315 | orchestrator | ok: [testbed-node-5] 2025-09-06 01:03:47.032326 | orchestrator | 2025-09-06 01:03:47.032337 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-06 01:03:47.032348 | orchestrator | Saturday 06 September 2025 01:00:24 +0000 (0:00:01.083) 0:00:38.414 **** 2025-09-06 01:03:47.032359 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.032386 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.032405 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.032416 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.032427 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.032438 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.032448 | orchestrator | 2025-09-06 01:03:47.032459 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-06 01:03:47.032470 | orchestrator | Saturday 06 September 2025 01:00:26 +0000 (0:00:01.941) 0:00:40.355 **** 2025-09-06 01:03:47.032485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.032509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.032522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.032540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.032554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.032573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.032585 | orchestrator | 2025-09-06 01:03:47.032596 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-06 01:03:47.032607 | orchestrator | Saturday 06 September 2025 01:00:28 +0000 (0:00:02.654) 0:00:43.010 **** 2025-09-06 01:03:47.032618 | orchestrator | [WARNING]: Skipped 2025-09-06 01:03:47.032630 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-06 01:03:47.032642 | orchestrator | due to this access issue: 2025-09-06 01:03:47.032653 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-06 01:03:47.032663 | orchestrator | a directory 2025-09-06 01:03:47.032674 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 01:03:47.032685 | orchestrator | 2025-09-06 01:03:47.032696 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-06 01:03:47.032712 | orchestrator | Saturday 06 September 2025 01:00:29 +0000 (0:00:00.822) 0:00:43.832 **** 2025-09-06 01:03:47.032724 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 01:03:47.032736 | orchestrator | 2025-09-06 01:03:47.032747 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-06 01:03:47.032758 | orchestrator | Saturday 06 September 2025 01:00:31 +0000 (0:00:01.334) 0:00:45.166 **** 2025-09-06 01:03:47.032769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.032786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.032806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.032817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.032836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.032849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.032860 | orchestrator | 2025-09-06 01:03:47.032871 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-06 01:03:47.032889 | orchestrator | Saturday 06 September 2025 01:00:33 +0000 (0:00:02.722) 0:00:47.889 **** 2025-09-06 01:03:47.032906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.032934 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.032946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.032957 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.032969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.032986 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.032999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.033010 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.033021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.033042 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.033059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.033070 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.033081 | orchestrator | 2025-09-06 01:03:47.033092 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-06 01:03:47.033103 | orchestrator | Saturday 06 September 2025 01:00:36 +0000 (0:00:02.482) 0:00:50.371 **** 2025-09-06 01:03:47.033115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.033126 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.033145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.033157 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.033169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.033196 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.033213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.033224 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.033235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.033247 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.033258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.033269 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.033280 | orchestrator | 2025-09-06 01:03:47.033291 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-06 01:03:47.033302 | orchestrator | Saturday 06 September 2025 01:00:38 +0000 (0:00:02.570) 0:00:52.942 **** 2025-09-06 01:03:47.033313 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.033324 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.033335 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.033345 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.033356 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.033366 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.033377 | orchestrator | 2025-09-06 01:03:47.033388 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-06 01:03:47.033405 | orchestrator | Saturday 06 September 2025 01:00:41 +0000 (0:00:02.632) 0:00:55.574 **** 2025-09-06 01:03:47.033416 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.033435 | orchestrator | 2025-09-06 01:03:47.033446 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-06 01:03:47.033457 | orchestrator | Saturday 06 September 2025 01:00:41 +0000 (0:00:00.090) 0:00:55.665 **** 2025-09-06 01:03:47.033467 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.033478 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.033489 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.033500 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.033510 | 
orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.033521 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.033532 | orchestrator | 2025-09-06 01:03:47.033542 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-06 01:03:47.033553 | orchestrator | Saturday 06 September 2025 01:00:42 +0000 (0:00:00.509) 0:00:56.174 **** 2025-09-06 01:03:47.033564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.033575 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.033591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.033602 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.033614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.033625 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.034174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.034215 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.034228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.034241 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.034253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.034265 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.034277 | orchestrator | 2025-09-06 01:03:47.034289 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-06 01:03:47.034300 | orchestrator | Saturday 06 September 2025 01:00:43 +0000 (0:00:01.718) 0:00:57.893 **** 2025-09-06 01:03:47.034318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.034331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.034362 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.034375 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.034391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.034404 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.034415 | orchestrator | 2025-09-06 01:03:47.034426 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-06 01:03:47.034437 | orchestrator | Saturday 06 September 2025 01:00:47 +0000 (0:00:03.606) 0:01:01.499 **** 2025-09-06 01:03:47.034449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.034476 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.034513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.034526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.034575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.034588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.034609 | orchestrator | 2025-09-06 01:03:47.034620 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-06 01:03:47.034631 | orchestrator | Saturday 06 September 2025 01:00:53 +0000 (0:00:06.395) 0:01:07.895 **** 2025-09-06 01:03:47.034654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.034666 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.034678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.034689 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.034705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.034716 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.034728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.034746 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.034758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.034771 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.034792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.034806 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.034819 | orchestrator | 2025-09-06 01:03:47.034832 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-06 01:03:47.034847 | orchestrator | Saturday 06 September 2025 01:00:57 +0000 (0:00:03.566) 0:01:11.462 **** 2025-09-06 01:03:47.034859 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.034872 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.034885 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.034898 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.034932 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:03:47.034946 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:03:47.034959 | orchestrator | 2025-09-06 01:03:47.034973 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-06 01:03:47.034985 | orchestrator | Saturday 06 September 2025 01:01:00 +0000 (0:00:03.208) 0:01:14.671 **** 2025-09-06 01:03:47.035004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.035019 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.035032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.035054 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.035069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.035082 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.035105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.035119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.035139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.035158 | orchestrator | 2025-09-06 01:03:47.035170 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-06 01:03:47.035181 | orchestrator | Saturday 06 September 2025 01:01:04 +0000 (0:00:03.839) 0:01:18.510 **** 2025-09-06 01:03:47.035192 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.035202 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.035213 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.035224 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.035235 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.035245 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.035256 | orchestrator | 2025-09-06 01:03:47.035267 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-06 01:03:47.035278 | orchestrator | Saturday 06 September 2025 01:01:06 +0000 (0:00:02.338) 0:01:20.849 **** 2025-09-06 01:03:47.035288 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.035299 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.035309 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.035320 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.035422 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.035434 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.035444 | orchestrator | 2025-09-06 01:03:47.035455 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-06 01:03:47.035466 | orchestrator | Saturday 06 September 2025 01:01:09 +0000 (0:00:02.666) 0:01:23.516 **** 2025-09-06 01:03:47.035477 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.035487 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.035498 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.035508 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.035519 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.035530 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.035540 | orchestrator | 2025-09-06 01:03:47.035551 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-06 01:03:47.035561 | orchestrator | Saturday 06 September 2025 01:01:11 +0000 (0:00:02.191) 0:01:25.707 **** 2025-09-06 01:03:47.035572 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.035583 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.035593 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.035604 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.035614 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.035625 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.035636 | orchestrator | 2025-09-06 01:03:47.035646 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-06 01:03:47.035657 | 
orchestrator | Saturday 06 September 2025 01:01:13 +0000 (0:00:02.319) 0:01:28.027 **** 2025-09-06 01:03:47.035668 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.035679 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.035689 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.035700 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.035717 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.035728 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.035739 | orchestrator | 2025-09-06 01:03:47.035750 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-06 01:03:47.035761 | orchestrator | Saturday 06 September 2025 01:01:16 +0000 (0:00:02.138) 0:01:30.165 **** 2025-09-06 01:03:47.035772 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.035782 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.035793 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.035803 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.035822 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.035833 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.035843 | orchestrator | 2025-09-06 01:03:47.035855 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-06 01:03:47.035865 | orchestrator | Saturday 06 September 2025 01:01:18 +0000 (0:00:02.061) 0:01:32.227 **** 2025-09-06 01:03:47.035876 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-06 01:03:47.035887 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.035898 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-06 01:03:47.035908 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.035985 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-06 01:03:47.035997 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.036008 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-06 01:03:47.036018 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.036029 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-06 01:03:47.036041 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.036055 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-06 01:03:47.036068 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.036081 | orchestrator | 2025-09-06 01:03:47.036094 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-06 01:03:47.036107 | orchestrator | Saturday 06 September 2025 01:01:20 +0000 (0:00:01.991) 0:01:34.218 **** 2025-09-06 01:03:47.036127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.036141 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.036155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.036169 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.036189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.036211 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.036224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.036236 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.036247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.036263 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.036275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.036286 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.036297 | orchestrator | 2025-09-06 01:03:47.036308 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-06 01:03:47.036318 | orchestrator | Saturday 06 September 2025 01:01:21 +0000 (0:00:01.829) 0:01:36.048 **** 2025-09-06 01:03:47.036330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.036348 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.036366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2025-09-06 01:03:47.036378 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.036389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.036401 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.036417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.036428 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.036439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.036450 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.036461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.036481 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.036492 | orchestrator | 2025-09-06 01:03:47.036502 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-06 01:03:47.036512 | orchestrator | Saturday 06 September 2025 01:01:23 +0000 (0:00:01.788) 0:01:37.836 **** 2025-09-06 01:03:47.036521 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.036535 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.036545 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.036555 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.036564 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.036574 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.036583 | orchestrator | 2025-09-06 01:03:47.036593 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-06 01:03:47.036603 | orchestrator | Saturday 06 September 2025 01:01:25 +0000 (0:00:01.953) 0:01:39.790 **** 2025-09-06 01:03:47.036612 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.036622 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.036631 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.036641 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:03:47.036650 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:03:47.036659 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:03:47.036669 | orchestrator | 2025-09-06 01:03:47.036679 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-06 01:03:47.036689 | orchestrator | Saturday 06 September 2025 01:01:28 +0000 (0:00:03.199) 0:01:42.989 **** 2025-09-06 01:03:47.036698 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.036708 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.036717 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.036727 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.036736 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.036745 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.036755 | orchestrator | 2025-09-06 01:03:47.036764 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-06 01:03:47.036774 | orchestrator | Saturday 06 September 2025 01:01:32 +0000 (0:00:03.201) 0:01:46.190 **** 2025-09-06 01:03:47.036784 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.036793 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.036803 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.036812 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.036822 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.036831 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.036841 | orchestrator | 2025-09-06 01:03:47.036850 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-06 01:03:47.036860 | orchestrator | Saturday 06 September 2025 01:01:34 +0000 (0:00:02.095) 0:01:48.286 **** 2025-09-06 01:03:47.036870 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.036879 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.036889 | orchestrator | 
skipping: [testbed-node-5] 2025-09-06 01:03:47.036898 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.036908 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.036931 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.036941 | orchestrator | 2025-09-06 01:03:47.036950 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-06 01:03:47.036971 | orchestrator | Saturday 06 September 2025 01:01:36 +0000 (0:00:01.881) 0:01:50.167 **** 2025-09-06 01:03:47.036980 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.036990 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.036999 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.037009 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.037018 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.037027 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.037037 | orchestrator | 2025-09-06 01:03:47.037046 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-06 01:03:47.037420 | orchestrator | Saturday 06 September 2025 01:01:38 +0000 (0:00:02.656) 0:01:52.823 **** 2025-09-06 01:03:47.037438 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.037448 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.037458 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.037467 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.037477 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.037487 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.037497 | orchestrator | 2025-09-06 01:03:47.037506 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-06 01:03:47.037516 | orchestrator | Saturday 06 September 2025 01:01:41 +0000 (0:00:02.995) 0:01:55.819 **** 2025-09-06 01:03:47.037526 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.037535 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.037545 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.037555 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.037564 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.037574 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.037584 | orchestrator | 2025-09-06 01:03:47.037593 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-06 01:03:47.037603 | orchestrator | Saturday 06 September 2025 01:01:43 +0000 (0:00:02.040) 0:01:57.859 **** 2025-09-06 01:03:47.037612 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.037622 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.037631 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.037641 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.037650 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.037659 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.037669 | orchestrator | 2025-09-06 01:03:47.037679 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-06 01:03:47.037688 | orchestrator | Saturday 06 September 2025 01:01:45 +0000 (0:00:02.113) 0:01:59.972 **** 2025-09-06 01:03:47.037698 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-06 01:03:47.037708 | 
orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.037718 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-06 01:03:47.037727 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.037737 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-06 01:03:47.037747 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.037756 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-06 01:03:47.037766 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.037782 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-06 01:03:47.037792 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.037801 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-06 01:03:47.037811 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.037855 | orchestrator | 2025-09-06 01:03:47.037865 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-06 01:03:47.037885 | orchestrator | Saturday 06 September 2025 01:01:49 +0000 (0:00:03.953) 0:02:03.925 **** 2025-09-06 01:03:47.037896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.037906 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.037941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.037952 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.037962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-06 01:03:47.037972 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.037983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.038000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.038059 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.038073 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.038085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-06 01:03:47.038099 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.038111 | orchestrator | 2025-09-06 01:03:47.038122 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 
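
The per-item dicts shown above are kolla-ansible's container definitions for neutron-server and neutron-ovn-metadata-agent; each carries a healthcheck mapping (interval, retries, start_period, test, timeout) that the "Check neutron containers" task below applies to the running containers. A minimal sketch of how such a dict could translate into Docker health-check options follows; the field-to-flag mapping and the seconds units are assumptions made for illustration, not kolla-ansible's actual implementation.

# Illustrative sketch -- values copied from the neutron-server item logged above;
# the mapping to docker run flags is assumed, this is not kolla-ansible code.
healthcheck = {
    'interval': '30',
    'retries': '3',
    'start_period': '5',
    'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'],
    'timeout': '30',
}

def to_docker_flags(hc):
    """Translate a kolla-style healthcheck dict into docker run flags (assumed mapping)."""
    # CMD-SHELL means the rest of the list is a single shell command string.
    cmd = ' '.join(hc['test'][1:]) if hc['test'][0] == 'CMD-SHELL' else ' '.join(hc['test'])
    return [
        '--health-cmd=' + cmd,
        '--health-interval=' + hc['interval'] + 's',
        '--health-retries=' + hc['retries'],
        '--health-start-period=' + hc['start_period'] + 's',
        '--health-timeout=' + hc['timeout'] + 's',
    ]

print(to_docker_flags(healthcheck))
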
2025-09-06 01:03:47.038133 | orchestrator | Saturday 06 September 2025 01:01:52 +0000 (0:00:02.267) 0:02:06.193 **** 2025-09-06 01:03:47.038150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.038163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.038183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.038207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-06 01:03:47.038220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.038238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-06 01:03:47.038250 | orchestrator | 2025-09-06 01:03:47.038262 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-06 01:03:47.038273 | orchestrator | Saturday 06 September 2025 01:01:55 +0000 (0:00:03.275) 0:02:09.469 **** 2025-09-06 01:03:47.038284 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.038296 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.038307 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.038318 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:03:47.038330 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:03:47.038631 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:03:47.038644 | orchestrator | 2025-09-06 01:03:47.038654 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-06 01:03:47.038664 | orchestrator | Saturday 06 September 2025 01:01:56 +0000 (0:00:00.797) 0:02:10.266 **** 2025-09-06 01:03:47.038674 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.038684 | orchestrator | 2025-09-06 01:03:47.038693 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-06 01:03:47.038703 | orchestrator | Saturday 06 September 2025 01:01:58 +0000 (0:00:02.437) 0:02:12.703 **** 2025-09-06 01:03:47.038713 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.038722 | orchestrator | 2025-09-06 01:03:47.038739 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-06 01:03:47.038749 | orchestrator | Saturday 06 September 2025 01:02:01 +0000 (0:00:02.454) 0:02:15.157 **** 2025-09-06 01:03:47.038759 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.038768 | orchestrator | 2025-09-06 01:03:47.038778 | orchestrator | TASK [neutron : 
Flush Handlers] ************************************************ 2025-09-06 01:03:47.038788 | orchestrator | Saturday 06 September 2025 01:02:47 +0000 (0:00:46.053) 0:03:01.211 **** 2025-09-06 01:03:47.038797 | orchestrator | 2025-09-06 01:03:47.038807 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-06 01:03:47.038817 | orchestrator | Saturday 06 September 2025 01:02:47 +0000 (0:00:00.184) 0:03:01.396 **** 2025-09-06 01:03:47.038827 | orchestrator | 2025-09-06 01:03:47.038837 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-06 01:03:47.038847 | orchestrator | Saturday 06 September 2025 01:02:48 +0000 (0:00:00.883) 0:03:02.279 **** 2025-09-06 01:03:47.038856 | orchestrator | 2025-09-06 01:03:47.038866 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-06 01:03:47.038876 | orchestrator | Saturday 06 September 2025 01:02:48 +0000 (0:00:00.222) 0:03:02.502 **** 2025-09-06 01:03:47.038886 | orchestrator | 2025-09-06 01:03:47.038977 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-06 01:03:47.038991 | orchestrator | Saturday 06 September 2025 01:02:48 +0000 (0:00:00.230) 0:03:02.732 **** 2025-09-06 01:03:47.039001 | orchestrator | 2025-09-06 01:03:47.039011 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-06 01:03:47.039020 | orchestrator | Saturday 06 September 2025 01:02:48 +0000 (0:00:00.216) 0:03:02.949 **** 2025-09-06 01:03:47.039030 | orchestrator | 2025-09-06 01:03:47.039040 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-06 01:03:47.039049 | orchestrator | Saturday 06 September 2025 01:02:48 +0000 (0:00:00.169) 0:03:03.118 **** 2025-09-06 01:03:47.039059 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.039068 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:03:47.039078 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:03:47.039087 | orchestrator | 2025-09-06 01:03:47.039097 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-06 01:03:47.039106 | orchestrator | Saturday 06 September 2025 01:03:17 +0000 (0:00:28.853) 0:03:31.972 **** 2025-09-06 01:03:47.039116 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:03:47.039125 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:03:47.039135 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:03:47.039144 | orchestrator | 2025-09-06 01:03:47.039154 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 01:03:47.039163 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-06 01:03:47.039175 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-06 01:03:47.039184 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-06 01:03:47.039194 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-06 01:03:47.039204 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-06 01:03:47.039220 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 
skipped=32  rescued=0 ignored=0 2025-09-06 01:03:47.039230 | orchestrator | 2025-09-06 01:03:47.039240 | orchestrator | 2025-09-06 01:03:47.039250 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 01:03:47.039270 | orchestrator | Saturday 06 September 2025 01:03:46 +0000 (0:00:28.455) 0:04:00.427 **** 2025-09-06 01:03:47.039281 | orchestrator | =============================================================================== 2025-09-06 01:03:47.039293 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.05s 2025-09-06 01:03:47.039305 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.85s 2025-09-06 01:03:47.039317 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 28.46s 2025-09-06 01:03:47.039328 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.28s 2025-09-06 01:03:47.039339 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.49s 2025-09-06 01:03:47.039351 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.40s 2025-09-06 01:03:47.039363 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.95s 2025-09-06 01:03:47.039869 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.84s 2025-09-06 01:03:47.039886 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.75s 2025-09-06 01:03:47.039895 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.61s 2025-09-06 01:03:47.039903 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.57s 2025-09-06 01:03:47.039935 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.51s 2025-09-06 01:03:47.039946 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.38s 2025-09-06 01:03:47.039959 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.28s 2025-09-06 01:03:47.039972 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.23s 2025-09-06 01:03:47.039984 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.21s 2025-09-06 01:03:47.039997 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.20s 2025-09-06 01:03:47.040012 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.20s 2025-09-06 01:03:47.040026 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.00s 2025-09-06 01:03:47.040038 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 2.72s 2025-09-06 01:03:47.040052 | orchestrator | 2025-09-06 01:03:47 | INFO  | Task 284ff730-ce2c-4cd1-9e04-495b22e7ab1d is in state SUCCESS 2025-09-06 01:03:47.040061 | orchestrator | 2025-09-06 01:03:47.040069 | orchestrator | 2025-09-06 01:03:47.040077 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 01:03:47.040085 | orchestrator | 2025-09-06 01:03:47.040093 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 01:03:47.040100 | orchestrator | Saturday 06 September 2025 
01:00:58 +0000 (0:00:00.418) 0:00:00.418 **** 2025-09-06 01:03:47.040187 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:03:47.040200 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:03:47.040208 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:03:47.040216 | orchestrator | 2025-09-06 01:03:47.040224 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 01:03:47.040232 | orchestrator | Saturday 06 September 2025 01:00:58 +0000 (0:00:00.485) 0:00:00.903 **** 2025-09-06 01:03:47.040240 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-06 01:03:47.040248 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-06 01:03:47.040256 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-06 01:03:47.040264 | orchestrator | 2025-09-06 01:03:47.040272 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-06 01:03:47.040279 | orchestrator | 2025-09-06 01:03:47.040287 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-06 01:03:47.040295 | orchestrator | Saturday 06 September 2025 01:00:59 +0000 (0:00:00.952) 0:00:01.855 **** 2025-09-06 01:03:47.040313 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:03:47.040322 | orchestrator | 2025-09-06 01:03:47.040330 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-06 01:03:47.040337 | orchestrator | Saturday 06 September 2025 01:01:00 +0000 (0:00:00.674) 0:00:02.530 **** 2025-09-06 01:03:47.040345 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-06 01:03:47.040353 | orchestrator | 2025-09-06 01:03:47.040361 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-06 01:03:47.040369 | orchestrator | Saturday 06 September 2025 01:01:03 +0000 (0:00:03.245) 0:00:05.776 **** 2025-09-06 01:03:47.040377 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-06 01:03:47.040385 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-06 01:03:47.040393 | orchestrator | 2025-09-06 01:03:47.040400 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-06 01:03:47.040409 | orchestrator | Saturday 06 September 2025 01:01:09 +0000 (0:00:05.762) 0:00:11.538 **** 2025-09-06 01:03:47.040416 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-06 01:03:47.040424 | orchestrator | 2025-09-06 01:03:47.040432 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-06 01:03:47.040440 | orchestrator | Saturday 06 September 2025 01:01:12 +0000 (0:00:03.547) 0:00:15.086 **** 2025-09-06 01:03:47.040448 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-06 01:03:47.040462 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-06 01:03:47.040470 | orchestrator | 2025-09-06 01:03:47.040478 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-06 01:03:47.040486 | orchestrator | Saturday 06 September 2025 01:01:16 +0000 (0:00:04.073) 0:00:19.160 **** 2025-09-06 01:03:47.040494 | orchestrator 
| ok: [testbed-node-0] => (item=admin) 2025-09-06 01:03:47.040502 | orchestrator | 2025-09-06 01:03:47.040509 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-06 01:03:47.040517 | orchestrator | Saturday 06 September 2025 01:01:20 +0000 (0:00:03.440) 0:00:22.600 **** 2025-09-06 01:03:47.040525 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-06 01:03:47.040533 | orchestrator | 2025-09-06 01:03:47.040541 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-06 01:03:47.040548 | orchestrator | Saturday 06 September 2025 01:01:24 +0000 (0:00:04.337) 0:00:26.937 **** 2025-09-06 01:03:47.040557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.040595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.040611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.040620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.040819 | orchestrator | 2025-09-06 01:03:47.040829 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-06 01:03:47.040838 | orchestrator | Saturday 06 September 2025 01:01:27 +0000 (0:00:03.121) 0:00:30.059 **** 2025-09-06 01:03:47.040848 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.040857 | orchestrator | 2025-09-06 01:03:47.040867 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-06 01:03:47.040876 | orchestrator | Saturday 06 September 2025 01:01:27 +0000 (0:00:00.111) 0:00:30.170 **** 2025-09-06 01:03:47.040886 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.040894 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.040904 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.040929 | orchestrator | 2025-09-06 01:03:47.040939 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-06 01:03:47.040949 | orchestrator | Saturday 06 September 2025 01:01:28 +0000 (0:00:00.245) 0:00:30.416 **** 2025-09-06 01:03:47.040958 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-06 01:03:47.040967 | orchestrator | 2025-09-06 01:03:47.040976 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-06 01:03:47.040987 | orchestrator | Saturday 06 September 2025 01:01:28 +0000 (0:00:00.584) 0:00:31.001 **** 2025-09-06 01:03:47.040996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.041037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.041048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.041063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 
'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.041271 | orchestrator | 2025-09-06 01:03:47.041279 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-06 01:03:47.041287 | orchestrator | Saturday 06 September 2025 01:01:35 +0000 (0:00:07.265) 0:00:38.266 **** 2025-09-06 01:03:47.041301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.041309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 01:03:47.041339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.041369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 01:03:47.041382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041447 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.041459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041472 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.041480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.041489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 01:03:47.041497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041559 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.041567 | orchestrator | 2025-09-06 01:03:47.041578 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-06 01:03:47.041586 | orchestrator | Saturday 06 September 2025 01:01:36 +0000 (0:00:00.818) 0:00:39.085 **** 2025-09-06 01:03:47.041595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.041603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 01:03:47.041611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041672 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.041684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.041692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 01:03:47.041701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041761 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.041773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.041782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 01:03:47.041790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.041853 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.041861 | orchestrator | 2025-09-06 01:03:47.041868 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-06 01:03:47.041877 | orchestrator | Saturday 06 September 2025 01:01:38 +0000 (0:00:01.939) 0:00:41.024 **** 2025-09-06 01:03:47.041889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.041898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.041975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.041986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042215 | orchestrator | 2025-09-06 01:03:47.042224 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-06 01:03:47.042232 | orchestrator | Saturday 06 September 2025 01:01:45 +0000 (0:00:07.060) 0:00:48.085 **** 2025-09-06 01:03:47.042244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.042252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.042260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.042274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042431 | orchestrator | 2025-09-06 01:03:47.042440 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-06 01:03:47.042448 | orchestrator | Saturday 06 September 2025 01:02:03 +0000 (0:00:17.943) 0:01:06.029 **** 2025-09-06 01:03:47.042455 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-06 01:03:47.042463 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-06 01:03:47.042471 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-06 01:03:47.042478 | 
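For reference, the designate-api loop item that appears repeatedly in the task output above, restated as a minimal standalone Python sketch. This is only a readability aid: the values are copied from the log output for testbed-node-0 (not taken from the playbooks themselves), and the snippet is hypothetical, not something the job executes.

    # Minimal sketch: the designate-api service entry as logged for testbed-node-0.
    # Values are transcribed from the loop item above; this is not part of the job.
    designate_api = {
        "container_name": "designate_api",
        "image": "registry.osism.tech/kolla/designate-api:2024.2",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"],
            "timeout": "30",
        },
        "haproxy": {
            "designate_api": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "9001", "listen_port": "9001",
            },
            "designate_api_external": {
                "enabled": "yes", "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "9001", "listen_port": "9001",
            },
        },
    }

    # The container healthcheck polls the node's API bind address, while HAProxy
    # exposes the same port 9001 internally and as api.testbed.osism.xyz externally.
    print(designate_api["healthcheck"]["test"][1])
    print(designate_api["haproxy"]["designate_api_external"]["external_fqdn"])

The per-node difference visible in the log is only the healthcheck URL (192.168.16.10/.11/.12); the image tag, ports, and HAProxy settings are identical across testbed-node-0, -1, and -2.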
orchestrator | 2025-09-06 01:03:47.042485 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-06 01:03:47.042492 | orchestrator | Saturday 06 September 2025 01:02:08 +0000 (0:00:05.123) 0:01:11.153 **** 2025-09-06 01:03:47.042498 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-06 01:03:47.042505 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-06 01:03:47.042512 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-06 01:03:47.042518 | orchestrator | 2025-09-06 01:03:47.042525 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-06 01:03:47.042532 | orchestrator | Saturday 06 September 2025 01:02:11 +0000 (0:00:02.315) 0:01:13.468 **** 2025-09-06 01:03:47.042542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.042549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.042560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.042572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}})  2025-09-06 01:03:47.042696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042734 | orchestrator | 2025-09-06 01:03:47.042741 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-06 01:03:47.042747 | orchestrator | Saturday 06 September 2025 01:02:13 +0000 (0:00:02.502) 0:01:15.971 **** 2025-09-06 01:03:47.042758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.042765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.042777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.042787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 
01:03:47.042856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.042891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.042929 | orchestrator | 2025-09-06 01:03:47.042936 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-06 01:03:47.042943 | orchestrator | Saturday 06 September 2025 01:02:15 +0000 (0:00:02.336) 0:01:18.308 **** 2025-09-06 01:03:47.042950 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.042957 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.042963 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.042970 | orchestrator | 2025-09-06 01:03:47.042976 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-06 01:03:47.042983 | orchestrator | Saturday 06 September 2025 01:02:16 +0000 (0:00:00.277) 0:01:18.585 **** 2025-09-06 01:03:47.042990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.043000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 01:03:47.043012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043044 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.043051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.043061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 01:03:47.043073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043105 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.043112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-06 01:03:47.043121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-06 01:03:47.043135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:03:47.043166 | orchestrator | skipping: 
[testbed-node-2] 2025-09-06 01:03:47.043173 | orchestrator | 2025-09-06 01:03:47.043180 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-06 01:03:47.043186 | orchestrator | Saturday 06 September 2025 01:02:17 +0000 (0:00:00.936) 0:01:19.522 **** 2025-09-06 01:03:47.043193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.043207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.043215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-06 01:03:47.043222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}}) 2025-09-06 01:03:47.043330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:03:47.043350 | orchestrator | 2025-09-06 01:03:47.043357 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-06 01:03:47.043367 | orchestrator | Saturday 06 September 2025 01:02:21 +0000 (0:00:04.469) 0:01:23.991 **** 2025-09-06 01:03:47.043374 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:03:47.043381 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:03:47.043387 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:03:47.043394 | orchestrator | 2025-09-06 01:03:47.043401 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-06 01:03:47.043407 | orchestrator | Saturday 06 September 2025 01:02:21 +0000 (0:00:00.283) 0:01:24.274 **** 2025-09-06 01:03:47.043414 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-06 01:03:47.043421 | orchestrator | 2025-09-06 01:03:47.043427 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-06 01:03:47.043439 | orchestrator | Saturday 06 September 2025 01:02:24 +0000 (0:00:02.348) 0:01:26.622 **** 2025-09-06 01:03:47.043446 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-06 01:03:47.043452 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-06 01:03:47.043459 | orchestrator | 2025-09-06 01:03:47.043466 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-06 01:03:47.043472 | orchestrator | Saturday 06 September 2025 01:02:26 +0000 (0:00:02.322) 0:01:28.944 **** 2025-09-06 01:03:47.043479 
| orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.043486 | orchestrator | 2025-09-06 01:03:47.043492 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-06 01:03:47.043499 | orchestrator | Saturday 06 September 2025 01:02:45 +0000 (0:00:18.658) 0:01:47.603 **** 2025-09-06 01:03:47.043506 | orchestrator | 2025-09-06 01:03:47.043512 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-06 01:03:47.043519 | orchestrator | Saturday 06 September 2025 01:02:45 +0000 (0:00:00.286) 0:01:47.889 **** 2025-09-06 01:03:47.043526 | orchestrator | 2025-09-06 01:03:47.043532 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-06 01:03:47.043539 | orchestrator | Saturday 06 September 2025 01:02:45 +0000 (0:00:00.064) 0:01:47.953 **** 2025-09-06 01:03:47.043545 | orchestrator | 2025-09-06 01:03:47.043552 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-06 01:03:47.043559 | orchestrator | Saturday 06 September 2025 01:02:45 +0000 (0:00:00.066) 0:01:48.020 **** 2025-09-06 01:03:47.043565 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.043572 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:03:47.043579 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:03:47.043585 | orchestrator | 2025-09-06 01:03:47.043592 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-06 01:03:47.043599 | orchestrator | Saturday 06 September 2025 01:02:56 +0000 (0:00:10.442) 0:01:58.463 **** 2025-09-06 01:03:47.043605 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:03:47.043612 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.043622 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:03:47.043628 | orchestrator | 2025-09-06 01:03:47.043635 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-06 01:03:47.043642 | orchestrator | Saturday 06 September 2025 01:03:07 +0000 (0:00:11.234) 0:02:09.697 **** 2025-09-06 01:03:47.043648 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.043655 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:03:47.043661 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:03:47.043668 | orchestrator | 2025-09-06 01:03:47.043674 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-06 01:03:47.043681 | orchestrator | Saturday 06 September 2025 01:03:14 +0000 (0:00:06.722) 0:02:16.419 **** 2025-09-06 01:03:47.043688 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.043694 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:03:47.043701 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:03:47.043707 | orchestrator | 2025-09-06 01:03:47.043714 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-06 01:03:47.043721 | orchestrator | Saturday 06 September 2025 01:03:19 +0000 (0:00:05.665) 0:02:22.085 **** 2025-09-06 01:03:47.043728 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.043734 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:03:47.043741 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:03:47.043747 | orchestrator | 2025-09-06 01:03:47.043754 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 
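
Editor's note: each container definition echoed in the designate tasks above carries a healthcheck block (interval, retries, start_period, test, timeout) whose test invokes helper commands shipped inside the kolla images (healthcheck_curl, healthcheck_port, healthcheck_listen). As a rough, standalone approximation for debugging from the deploy host, the sketch below probes the same endpoints that appear in the log for testbed-node-0; it is illustrative only and not part of kolla-ansible, and the real checks run inside the containers.

```python
#!/usr/bin/env python3
"""Rough stand-ins for the kolla healthcheck_* commands seen in the log.

Illustrative only: the real checks run *inside* the containers; these
probes approximate them from the outside for quick debugging.
"""
import socket
import urllib.request


def tcp_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Approximates healthcheck_listen / healthcheck_port: is the port reachable?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def http_ok(url: str, timeout: float = 3.0) -> bool:
    """Approximates healthcheck_curl: does the endpoint answer an HTTP GET?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # URLError/HTTPError are OSError subclasses
        return False


if __name__ == "__main__":
    # Host/port values taken from the designate items above (testbed-node-0).
    checks = {
        "designate-api (healthcheck_curl)": http_ok("http://192.168.16.10:9001"),
        "designate-backend-bind9 (healthcheck_listen named 53)": tcp_open("192.168.16.10", 53),
        "message bus reachability (healthcheck_port * 5672)": tcp_open("192.168.16.10", 5672),
    }
    for name, ok in checks.items():
        print(f"{'OK  ' if ok else 'FAIL'} {name}")
```
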
2025-09-06 01:03:47.043761 | orchestrator | Saturday 06 September 2025 01:03:31 +0000 (0:00:12.057) 0:02:34.142 **** 2025-09-06 01:03:47.043767 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.043774 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:03:47.043780 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:03:47.043787 | orchestrator | 2025-09-06 01:03:47.043794 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-06 01:03:47.043805 | orchestrator | Saturday 06 September 2025 01:03:37 +0000 (0:00:05.704) 0:02:39.847 **** 2025-09-06 01:03:47.043811 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:03:47.043818 | orchestrator | 2025-09-06 01:03:47.043825 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 01:03:47.043831 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-06 01:03:47.043838 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 01:03:47.043845 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 01:03:47.043852 | orchestrator | 2025-09-06 01:03:47.043858 | orchestrator | 2025-09-06 01:03:47.043865 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 01:03:47.043872 | orchestrator | Saturday 06 September 2025 01:03:45 +0000 (0:00:07.700) 0:02:47.548 **** 2025-09-06 01:03:47.043878 | orchestrator | =============================================================================== 2025-09-06 01:03:47.043885 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.66s 2025-09-06 01:03:47.043894 | orchestrator | designate : Copying over designate.conf -------------------------------- 17.94s 2025-09-06 01:03:47.043901 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.06s 2025-09-06 01:03:47.043908 | orchestrator | designate : Restart designate-api container ---------------------------- 11.23s 2025-09-06 01:03:47.043927 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.44s 2025-09-06 01:03:47.043933 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.70s 2025-09-06 01:03:47.043940 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.27s 2025-09-06 01:03:47.043947 | orchestrator | designate : Copying over config.json files for services ----------------- 7.06s 2025-09-06 01:03:47.043953 | orchestrator | designate : Restart designate-central container ------------------------- 6.72s 2025-09-06 01:03:47.043960 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 5.76s 2025-09-06 01:03:47.043967 | orchestrator | designate : Restart designate-worker container -------------------------- 5.70s 2025-09-06 01:03:47.043973 | orchestrator | designate : Restart designate-producer container ------------------------ 5.67s 2025-09-06 01:03:47.043980 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.12s 2025-09-06 01:03:47.043987 | orchestrator | designate : Check designate containers ---------------------------------- 4.47s 2025-09-06 01:03:47.043993 | orchestrator | service-ks-register : designate | Granting user roles 
------------------- 4.34s 2025-09-06 01:03:47.044000 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.07s 2025-09-06 01:03:47.044007 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.55s 2025-09-06 01:03:47.044013 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.44s 2025-09-06 01:03:47.044020 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.25s 2025-09-06 01:03:47.044026 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.12s 2025-09-06 01:03:47.044033 | orchestrator | 2025-09-06 01:03:47 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:50.080704 | orchestrator | 2025-09-06 01:03:50 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:50.081677 | orchestrator | 2025-09-06 01:03:50 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:03:50.085257 | orchestrator | 2025-09-06 01:03:50 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:50.085600 | orchestrator | 2025-09-06 01:03:50 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:03:50.085671 | orchestrator | 2025-09-06 01:03:50 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:53.129595 | orchestrator | 2025-09-06 01:03:53 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:53.130296 | orchestrator | 2025-09-06 01:03:53 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:03:53.131275 | orchestrator | 2025-09-06 01:03:53 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:53.132292 | orchestrator | 2025-09-06 01:03:53 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:03:53.132314 | orchestrator | 2025-09-06 01:03:53 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:56.179792 | orchestrator | 2025-09-06 01:03:56 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:56.180147 | orchestrator | 2025-09-06 01:03:56 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:03:56.181177 | orchestrator | 2025-09-06 01:03:56 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:56.181999 | orchestrator | 2025-09-06 01:03:56 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:03:56.182065 | orchestrator | 2025-09-06 01:03:56 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:03:59.226226 | orchestrator | 2025-09-06 01:03:59 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:03:59.228811 | orchestrator | 2025-09-06 01:03:59 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:03:59.230737 | orchestrator | 2025-09-06 01:03:59 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:03:59.233141 | orchestrator | 2025-09-06 01:03:59 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:03:59.233401 | orchestrator | 2025-09-06 01:03:59 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:02.286655 | orchestrator | 2025-09-06 01:04:02 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:02.287351 | 
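
Editor's note: the interleaved "INFO | Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines are the orchestrator polling its task queue until every task reaches a terminal state. A minimal sketch of such a wait loop is shown below; only the task IDs and the fixed one-second wait are taken from the log, while the get_task_state callback and the terminal-state names are hypothetical placeholders, not the OSISM manager's actual API.

```python
import time
from typing import Callable, Iterable

# Task IDs as they appear in the polling output above.
TASK_IDS = [
    "dfd2161b-5599-483f-a5dd-494a5f4a3848",
    "c38035ff-ed5a-4d7a-90d1-0cee5d9647c8",
    "b0104131-6163-4654-8be3-d664483dbea6",
    "1f9aa92d-2c79-4989-8d29-f58fdc001e06",
]

# Placeholder state names; the real terminal states are not shown in this log excerpt.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}


def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   wait_seconds: float = 1.0) -> dict:
    """Poll each task until every one reports a terminal state."""
    pending = set(task_ids)
    states: dict = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical lookup against the task backend
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        pending = {t for t, s in states.items() if s not in TERMINAL_STATES}
        if pending:
            print(f"Wait {int(wait_seconds)} second(s) until the next check")
            time.sleep(wait_seconds)
    return states


if __name__ == "__main__":
    # Demo with a stub backend that flips every task to SUCCESS on the second poll.
    seen = set()

    def stub_state(task_id: str) -> str:
        if task_id in seen:
            return "SUCCESS"
        seen.add(task_id)
        return "STARTED"

    wait_for_tasks(TASK_IDS, stub_state)
```
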
orchestrator | 2025-09-06 01:04:02 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:02.288765 | orchestrator | 2025-09-06 01:04:02 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:02.290395 | orchestrator | 2025-09-06 01:04:02 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:02.290622 | orchestrator | 2025-09-06 01:04:02 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:05.344173 | orchestrator | 2025-09-06 01:04:05 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:05.348834 | orchestrator | 2025-09-06 01:04:05 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:05.351961 | orchestrator | 2025-09-06 01:04:05 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:05.354511 | orchestrator | 2025-09-06 01:04:05 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:05.355091 | orchestrator | 2025-09-06 01:04:05 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:08.393007 | orchestrator | 2025-09-06 01:04:08 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:08.393947 | orchestrator | 2025-09-06 01:04:08 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:08.394593 | orchestrator | 2025-09-06 01:04:08 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:08.396865 | orchestrator | 2025-09-06 01:04:08 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:08.396912 | orchestrator | 2025-09-06 01:04:08 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:11.440379 | orchestrator | 2025-09-06 01:04:11 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:11.446325 | orchestrator | 2025-09-06 01:04:11 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:11.451387 | orchestrator | 2025-09-06 01:04:11 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:11.453118 | orchestrator | 2025-09-06 01:04:11 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:11.453271 | orchestrator | 2025-09-06 01:04:11 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:14.493828 | orchestrator | 2025-09-06 01:04:14 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:14.494832 | orchestrator | 2025-09-06 01:04:14 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:14.496591 | orchestrator | 2025-09-06 01:04:14 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:14.498324 | orchestrator | 2025-09-06 01:04:14 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:14.498356 | orchestrator | 2025-09-06 01:04:14 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:17.552189 | orchestrator | 2025-09-06 01:04:17 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:17.553975 | orchestrator | 2025-09-06 01:04:17 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:17.555601 | orchestrator | 2025-09-06 01:04:17 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:17.557192 | 
orchestrator | 2025-09-06 01:04:17 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:17.557446 | orchestrator | 2025-09-06 01:04:17 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:20.609481 | orchestrator | 2025-09-06 01:04:20 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:20.610481 | orchestrator | 2025-09-06 01:04:20 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:20.611919 | orchestrator | 2025-09-06 01:04:20 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:20.613447 | orchestrator | 2025-09-06 01:04:20 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:20.613754 | orchestrator | 2025-09-06 01:04:20 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:23.673460 | orchestrator | 2025-09-06 01:04:23 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:23.673567 | orchestrator | 2025-09-06 01:04:23 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:23.673582 | orchestrator | 2025-09-06 01:04:23 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:23.673805 | orchestrator | 2025-09-06 01:04:23 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:23.674216 | orchestrator | 2025-09-06 01:04:23 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:26.731035 | orchestrator | 2025-09-06 01:04:26 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:26.734389 | orchestrator | 2025-09-06 01:04:26 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:26.736085 | orchestrator | 2025-09-06 01:04:26 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:26.738187 | orchestrator | 2025-09-06 01:04:26 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:26.738213 | orchestrator | 2025-09-06 01:04:26 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:29.778100 | orchestrator | 2025-09-06 01:04:29 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:29.778318 | orchestrator | 2025-09-06 01:04:29 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:29.779283 | orchestrator | 2025-09-06 01:04:29 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:29.780775 | orchestrator | 2025-09-06 01:04:29 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:29.781644 | orchestrator | 2025-09-06 01:04:29 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:32.829566 | orchestrator | 2025-09-06 01:04:32 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:32.830983 | orchestrator | 2025-09-06 01:04:32 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:32.832336 | orchestrator | 2025-09-06 01:04:32 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:32.834271 | orchestrator | 2025-09-06 01:04:32 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:32.834300 | orchestrator | 2025-09-06 01:04:32 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:35.879636 | orchestrator | 2025-09-06 
01:04:35 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:35.880643 | orchestrator | 2025-09-06 01:04:35 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:35.881317 | orchestrator | 2025-09-06 01:04:35 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:35.882150 | orchestrator | 2025-09-06 01:04:35 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:35.882174 | orchestrator | 2025-09-06 01:04:35 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:38.928936 | orchestrator | 2025-09-06 01:04:38 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:38.930721 | orchestrator | 2025-09-06 01:04:38 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:38.931941 | orchestrator | 2025-09-06 01:04:38 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:38.933603 | orchestrator | 2025-09-06 01:04:38 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:38.933697 | orchestrator | 2025-09-06 01:04:38 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:41.990003 | orchestrator | 2025-09-06 01:04:41 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:41.992160 | orchestrator | 2025-09-06 01:04:41 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:41.994500 | orchestrator | 2025-09-06 01:04:41 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:41.996420 | orchestrator | 2025-09-06 01:04:42 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:41.996910 | orchestrator | 2025-09-06 01:04:42 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:45.031503 | orchestrator | 2025-09-06 01:04:45 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:45.032311 | orchestrator | 2025-09-06 01:04:45 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:45.035608 | orchestrator | 2025-09-06 01:04:45 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:45.037225 | orchestrator | 2025-09-06 01:04:45 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:45.037265 | orchestrator | 2025-09-06 01:04:45 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:48.073865 | orchestrator | 2025-09-06 01:04:48 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:48.074112 | orchestrator | 2025-09-06 01:04:48 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:48.075725 | orchestrator | 2025-09-06 01:04:48 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:48.076407 | orchestrator | 2025-09-06 01:04:48 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:48.076432 | orchestrator | 2025-09-06 01:04:48 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:51.107259 | orchestrator | 2025-09-06 01:04:51 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:51.108683 | orchestrator | 2025-09-06 01:04:51 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:51.110908 | orchestrator | 2025-09-06 
01:04:51 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:51.113312 | orchestrator | 2025-09-06 01:04:51 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:51.113386 | orchestrator | 2025-09-06 01:04:51 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:54.149376 | orchestrator | 2025-09-06 01:04:54 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:54.151038 | orchestrator | 2025-09-06 01:04:54 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state STARTED 2025-09-06 01:04:54.152479 | orchestrator | 2025-09-06 01:04:54 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:54.154078 | orchestrator | 2025-09-06 01:04:54 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:54.154729 | orchestrator | 2025-09-06 01:04:54 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:04:57.206997 | orchestrator | 2025-09-06 01:04:57 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:04:57.209157 | orchestrator | 2025-09-06 01:04:57 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:04:57.213640 | orchestrator | 2025-09-06 01:04:57.213693 | orchestrator | 2025-09-06 01:04:57.213706 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 01:04:57.213718 | orchestrator | 2025-09-06 01:04:57.213729 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 01:04:57.213741 | orchestrator | Saturday 06 September 2025 01:03:49 +0000 (0:00:00.276) 0:00:00.276 **** 2025-09-06 01:04:57.213752 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:04:57.213764 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:04:57.213775 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:04:57.213833 | orchestrator | 2025-09-06 01:04:57.213952 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 01:04:57.213969 | orchestrator | Saturday 06 September 2025 01:03:49 +0000 (0:00:00.319) 0:00:00.595 **** 2025-09-06 01:04:57.213980 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-06 01:04:57.213991 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-06 01:04:57.214002 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-06 01:04:57.214013 | orchestrator | 2025-09-06 01:04:57.214096 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-06 01:04:57.214116 | orchestrator | 2025-09-06 01:04:57.214137 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-06 01:04:57.214157 | orchestrator | Saturday 06 September 2025 01:03:50 +0000 (0:00:00.421) 0:00:01.016 **** 2025-09-06 01:04:57.214175 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:04:57.214189 | orchestrator | 2025-09-06 01:04:57.214200 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-06 01:04:57.214211 | orchestrator | Saturday 06 September 2025 01:03:50 +0000 (0:00:00.527) 0:00:01.544 **** 2025-09-06 01:04:57.214222 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-06 01:04:57.214232 | 
orchestrator | 2025-09-06 01:04:57.214243 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-06 01:04:57.214254 | orchestrator | Saturday 06 September 2025 01:03:54 +0000 (0:00:03.719) 0:00:05.263 **** 2025-09-06 01:04:57.214264 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-06 01:04:57.214276 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-06 01:04:57.214286 | orchestrator | 2025-09-06 01:04:57.214297 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-06 01:04:57.214308 | orchestrator | Saturday 06 September 2025 01:04:01 +0000 (0:00:06.835) 0:00:12.099 **** 2025-09-06 01:04:57.214319 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-06 01:04:57.214329 | orchestrator | 2025-09-06 01:04:57.214340 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-06 01:04:57.214351 | orchestrator | Saturday 06 September 2025 01:04:04 +0000 (0:00:03.393) 0:00:15.493 **** 2025-09-06 01:04:57.214362 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-06 01:04:57.214372 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-06 01:04:57.214383 | orchestrator | 2025-09-06 01:04:57.214393 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-06 01:04:57.214404 | orchestrator | Saturday 06 September 2025 01:04:09 +0000 (0:00:04.277) 0:00:19.770 **** 2025-09-06 01:04:57.214415 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-06 01:04:57.214425 | orchestrator | 2025-09-06 01:04:57.214436 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-06 01:04:57.214446 | orchestrator | Saturday 06 September 2025 01:04:12 +0000 (0:00:03.326) 0:00:23.096 **** 2025-09-06 01:04:57.214457 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-06 01:04:57.214467 | orchestrator | 2025-09-06 01:04:57.214478 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-06 01:04:57.214489 | orchestrator | Saturday 06 September 2025 01:04:16 +0000 (0:00:04.061) 0:00:27.158 **** 2025-09-06 01:04:57.214499 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:04:57.214510 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:04:57.214521 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:04:57.214531 | orchestrator | 2025-09-06 01:04:57.214542 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-06 01:04:57.214552 | orchestrator | Saturday 06 September 2025 01:04:16 +0000 (0:00:00.277) 0:00:27.435 **** 2025-09-06 01:04:57.214581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.214627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.214642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.214656 | orchestrator | 2025-09-06 01:04:57.214669 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-06 01:04:57.214682 | orchestrator | Saturday 06 September 2025 01:04:17 +0000 (0:00:00.812) 0:00:28.248 **** 2025-09-06 01:04:57.214695 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:04:57.214708 | orchestrator | 2025-09-06 01:04:57.214720 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-06 01:04:57.214733 | orchestrator | Saturday 06 September 2025 01:04:17 +0000 (0:00:00.122) 0:00:28.371 **** 2025-09-06 01:04:57.214744 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:04:57.214757 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:04:57.214769 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:04:57.214809 | orchestrator | 2025-09-06 01:04:57.214823 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-06 01:04:57.214835 | orchestrator | Saturday 06 September 2025 01:04:18 +0000 (0:00:00.474) 0:00:28.845 **** 2025-09-06 01:04:57.214848 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:04:57.214860 | orchestrator | 2025-09-06 01:04:57.214873 
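Each placement-api container item above carries a healthcheck that runs kolla's healthcheck_curl helper against the node's API port, alongside the haproxy frontends that expose port 8780 internally and externally. The healthcheck boils down to "does the endpoint answer an HTTP request"; a rough Python stand-in (the exact options of healthcheck_curl are not reproduced, and the address is the one from the log):

import sys
import urllib.request

# Rough stand-in for `healthcheck_curl http://192.168.16.10:8780`: exit 0 if
# the endpoint answers, non-zero otherwise, so the container runtime can mark
# the service healthy or unhealthy.
URL = "http://192.168.16.10:8780"

try:
    urllib.request.urlopen(URL, timeout=30)
except OSError:
    sys.exit(1)
sys.exit(0)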
| orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-06 01:04:57.214885 | orchestrator | Saturday 06 September 2025 01:04:18 +0000 (0:00:00.527) 0:00:29.373 **** 2025-09-06 01:04:57.214913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.214944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.214960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.214974 | orchestrator | 2025-09-06 01:04:57.214985 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-06 01:04:57.214996 | orchestrator | Saturday 06 September 2025 01:04:20 +0000 (0:00:01.431) 0:00:30.804 **** 2025-09-06 01:04:57.215007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 01:04:57.215137 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:04:57.215176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 01:04:57.215197 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:04:57.215231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 01:04:57.215244 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:04:57.215255 | orchestrator | 2025-09-06 01:04:57.215266 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-06 01:04:57.215276 | orchestrator | Saturday 06 September 2025 01:04:20 +0000 (0:00:00.852) 0:00:31.656 **** 2025-09-06 01:04:57.215287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 01:04:57.215299 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:04:57.215310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 01:04:57.215328 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:04:57.215409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 01:04:57.215422 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:04:57.215432 | orchestrator | 2025-09-06 01:04:57.215443 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-06 01:04:57.215454 | orchestrator | Saturday 06 September 2025 01:04:21 +0000 (0:00:00.706) 0:00:32.362 **** 2025-09-06 01:04:57.215477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.215489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.215501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.215520 | orchestrator | 2025-09-06 01:04:57.215531 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-06 01:04:57.215542 | orchestrator | Saturday 06 September 2025 01:04:23 +0000 (0:00:01.377) 0:00:33.739 **** 2025-09-06 01:04:57.215553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.215570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.215589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.215601 | orchestrator | 2025-09-06 01:04:57.215612 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-06 01:04:57.215624 | orchestrator | Saturday 06 September 2025 01:04:25 +0000 (0:00:02.517) 0:00:36.257 **** 2025-09-06 01:04:57.215635 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-06 01:04:57.215646 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-06 01:04:57.215657 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-06 01:04:57.215668 | orchestrator | 2025-09-06 01:04:57.215679 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-06 01:04:57.215690 | orchestrator | Saturday 06 September 2025 01:04:27 +0000 (0:00:01.691) 0:00:37.949 **** 2025-09-06 01:04:57.215701 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:04:57.215717 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:04:57.215729 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:04:57.215739 | orchestrator | 2025-09-06 01:04:57.215750 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-06 01:04:57.215761 | orchestrator | Saturday 06 September 2025 01:04:28 +0000 (0:00:01.337) 0:00:39.286 **** 2025-09-06 01:04:57.215772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 01:04:57.215826 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:04:57.215839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 01:04:57.215850 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:04:57.215882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-06 01:04:57.215895 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:04:57.215905 | orchestrator | 2025-09-06 01:04:57.215916 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-06 01:04:57.215927 | orchestrator | Saturday 06 September 2025 01:04:29 +0000 (0:00:00.525) 0:00:39.812 **** 2025-09-06 01:04:57.215939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.215957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.215969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-06 01:04:57.215980 | orchestrator | 2025-09-06 01:04:57.215991 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-06 01:04:57.216002 | orchestrator | Saturday 06 September 2025 01:04:30 +0000 (0:00:01.421) 0:00:41.233 **** 2025-09-06 01:04:57.216013 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:04:57.216024 | orchestrator | 2025-09-06 01:04:57.216034 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-06 01:04:57.216045 | orchestrator | Saturday 06 September 2025 01:04:33 +0000 (0:00:02.674) 0:00:43.907 **** 2025-09-06 01:04:57.216056 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:04:57.216067 | orchestrator | 2025-09-06 01:04:57.216078 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-06 01:04:57.216088 | orchestrator | Saturday 06 September 2025 01:04:35 +0000 (0:00:02.437) 0:00:46.345 **** 2025-09-06 01:04:57.216099 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:04:57.216110 | orchestrator | 2025-09-06 01:04:57.216125 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-06 01:04:57.216138 | orchestrator | Saturday 06 September 2025 01:04:49 +0000 (0:00:14.348) 0:01:00.693 **** 2025-09-06 01:04:57.216158 | orchestrator | 2025-09-06 01:04:57.216178 | orchestrator | TASK 
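The database tasks above create the placement schema and its database user before the bootstrap container applies the schema (the kolla bootstrap step effectively runs placement-manage db sync in a one-shot container). A hedged sketch of the equivalent SQL, issued with PyMySQL purely for illustration; host and credentials are placeholders, and kolla-ansible drives this through its own MySQL modules rather than code like this:

import pymysql  # assumes PyMySQL is available; illustration only

# Placeholder connection details for the database VIP used by the deployment.
conn = pymysql.connect(host="192.168.16.9", user="root", password="placeholder")
try:
    with conn.cursor() as cur:
        cur.execute("CREATE DATABASE IF NOT EXISTS placement")
        cur.execute(
            "CREATE USER IF NOT EXISTS 'placement'@'%' "
            "IDENTIFIED BY 'placeholder-password'"
        )
        cur.execute("GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%'")
    conn.commit()
finally:
    conn.close()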
[placement : Flush handlers] ********************************************** 2025-09-06 01:04:57.216198 | orchestrator | Saturday 06 September 2025 01:04:50 +0000 (0:00:00.104) 0:01:00.797 **** 2025-09-06 01:04:57.216218 | orchestrator | 2025-09-06 01:04:57.216248 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-06 01:04:57.216264 | orchestrator | Saturday 06 September 2025 01:04:50 +0000 (0:00:00.068) 0:01:00.866 **** 2025-09-06 01:04:57.216275 | orchestrator | 2025-09-06 01:04:57.216286 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-06 01:04:57.216304 | orchestrator | Saturday 06 September 2025 01:04:50 +0000 (0:00:00.062) 0:01:00.928 **** 2025-09-06 01:04:57.216315 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:04:57.216326 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:04:57.216336 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:04:57.216347 | orchestrator | 2025-09-06 01:04:57.216358 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 01:04:57.216370 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 01:04:57.216381 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-06 01:04:57.216392 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-06 01:04:57.216403 | orchestrator | 2025-09-06 01:04:57.216413 | orchestrator | 2025-09-06 01:04:57.216424 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 01:04:57.216435 | orchestrator | Saturday 06 September 2025 01:04:55 +0000 (0:00:05.531) 0:01:06.460 **** 2025-09-06 01:04:57.216446 | orchestrator | =============================================================================== 2025-09-06 01:04:57.216456 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.35s 2025-09-06 01:04:57.216467 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.84s 2025-09-06 01:04:57.216477 | orchestrator | placement : Restart placement-api container ----------------------------- 5.53s 2025-09-06 01:04:57.216488 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.28s 2025-09-06 01:04:57.216498 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.06s 2025-09-06 01:04:57.216509 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.72s 2025-09-06 01:04:57.216519 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.39s 2025-09-06 01:04:57.216530 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.33s 2025-09-06 01:04:57.216541 | orchestrator | placement : Creating placement databases -------------------------------- 2.67s 2025-09-06 01:04:57.216551 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.52s 2025-09-06 01:04:57.216562 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.44s 2025-09-06 01:04:57.216572 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.69s 2025-09-06 01:04:57.216583 | orchestrator | service-cert-copy : 
placement | Copying over extra CA certificates ------ 1.43s 2025-09-06 01:04:57.216594 | orchestrator | placement : Check placement containers ---------------------------------- 1.42s 2025-09-06 01:04:57.216604 | orchestrator | placement : Copying over config.json files for services ----------------- 1.38s 2025-09-06 01:04:57.216615 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.34s 2025-09-06 01:04:57.216625 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.85s 2025-09-06 01:04:57.216636 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.81s 2025-09-06 01:04:57.216647 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s 2025-09-06 01:04:57.216658 | orchestrator | placement : include_tasks ----------------------------------------------- 0.53s 2025-09-06 01:04:57.216668 | orchestrator | 2025-09-06 01:04:57 | INFO  | Task c38035ff-ed5a-4d7a-90d1-0cee5d9647c8 is in state SUCCESS 2025-09-06 01:04:57.216680 | orchestrator | 2025-09-06 01:04:57 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:04:57.216690 | orchestrator | 2025-09-06 01:04:57 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:04:57.216707 | orchestrator | 2025-09-06 01:04:57 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:00.257857 | orchestrator | 2025-09-06 01:05:00 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:00.257965 | orchestrator | 2025-09-06 01:05:00 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:00.258640 | orchestrator | 2025-09-06 01:05:00 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:00.259643 | orchestrator | 2025-09-06 01:05:00 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:00.259687 | orchestrator | 2025-09-06 01:05:00 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:03.302967 | orchestrator | 2025-09-06 01:05:03 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:03.304750 | orchestrator | 2025-09-06 01:05:03 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:03.307272 | orchestrator | 2025-09-06 01:05:03 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:03.309390 | orchestrator | 2025-09-06 01:05:03 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:03.309417 | orchestrator | 2025-09-06 01:05:03 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:06.355898 | orchestrator | 2025-09-06 01:05:06 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:06.358841 | orchestrator | 2025-09-06 01:05:06 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:06.361619 | orchestrator | 2025-09-06 01:05:06 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:06.363839 | orchestrator | 2025-09-06 01:05:06 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:06.364179 | orchestrator | 2025-09-06 01:05:06 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:09.412541 | orchestrator | 2025-09-06 01:05:09 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state 
STARTED 2025-09-06 01:05:09.414056 | orchestrator | 2025-09-06 01:05:09 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:09.415738 | orchestrator | 2025-09-06 01:05:09 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:09.416879 | orchestrator | 2025-09-06 01:05:09 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:09.417591 | orchestrator | 2025-09-06 01:05:09 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:12.458311 | orchestrator | 2025-09-06 01:05:12 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:12.458682 | orchestrator | 2025-09-06 01:05:12 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:12.459601 | orchestrator | 2025-09-06 01:05:12 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:12.461366 | orchestrator | 2025-09-06 01:05:12 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:12.461388 | orchestrator | 2025-09-06 01:05:12 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:15.502209 | orchestrator | 2025-09-06 01:05:15 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:15.502720 | orchestrator | 2025-09-06 01:05:15 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:15.503372 | orchestrator | 2025-09-06 01:05:15 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:15.504370 | orchestrator | 2025-09-06 01:05:15 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:15.504393 | orchestrator | 2025-09-06 01:05:15 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:18.542899 | orchestrator | 2025-09-06 01:05:18 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:18.543937 | orchestrator | 2025-09-06 01:05:18 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:18.545104 | orchestrator | 2025-09-06 01:05:18 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:18.547526 | orchestrator | 2025-09-06 01:05:18 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:18.547938 | orchestrator | 2025-09-06 01:05:18 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:21.586382 | orchestrator | 2025-09-06 01:05:21 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:21.587974 | orchestrator | 2025-09-06 01:05:21 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:21.590004 | orchestrator | 2025-09-06 01:05:21 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:21.591815 | orchestrator | 2025-09-06 01:05:21 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:21.591947 | orchestrator | 2025-09-06 01:05:21 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:24.632528 | orchestrator | 2025-09-06 01:05:24 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:24.634286 | orchestrator | 2025-09-06 01:05:24 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:24.635953 | orchestrator | 2025-09-06 01:05:24 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state 
STARTED 2025-09-06 01:05:24.637422 | orchestrator | 2025-09-06 01:05:24 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:24.637447 | orchestrator | 2025-09-06 01:05:24 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:27.681440 | orchestrator | 2025-09-06 01:05:27 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:27.682886 | orchestrator | 2025-09-06 01:05:27 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:27.684987 | orchestrator | 2025-09-06 01:05:27 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:27.686956 | orchestrator | 2025-09-06 01:05:27 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:27.687442 | orchestrator | 2025-09-06 01:05:27 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:30.724641 | orchestrator | 2025-09-06 01:05:30 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:30.727044 | orchestrator | 2025-09-06 01:05:30 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:30.730161 | orchestrator | 2025-09-06 01:05:30 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:30.731302 | orchestrator | 2025-09-06 01:05:30 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:30.731549 | orchestrator | 2025-09-06 01:05:30 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:33.777995 | orchestrator | 2025-09-06 01:05:33 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:33.778891 | orchestrator | 2025-09-06 01:05:33 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:33.779861 | orchestrator | 2025-09-06 01:05:33 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:33.781090 | orchestrator | 2025-09-06 01:05:33 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:33.781133 | orchestrator | 2025-09-06 01:05:33 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:36.814063 | orchestrator | 2025-09-06 01:05:36 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:36.814590 | orchestrator | 2025-09-06 01:05:36 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:36.815511 | orchestrator | 2025-09-06 01:05:36 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:36.816229 | orchestrator | 2025-09-06 01:05:36 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state STARTED 2025-09-06 01:05:36.816322 | orchestrator | 2025-09-06 01:05:36 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:39.844533 | orchestrator | 2025-09-06 01:05:39 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:39.844811 | orchestrator | 2025-09-06 01:05:39 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:39.844847 | orchestrator | 2025-09-06 01:05:39 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:39.845912 | orchestrator | 2025-09-06 01:05:39 | INFO  | Task 1f9aa92d-2c79-4989-8d29-f58fdc001e06 is in state SUCCESS 2025-09-06 01:05:39.845938 | orchestrator | 2025-09-06 01:05:39.847511 | orchestrator | 2025-09-06 01:05:39.847545 | orchestrator 
| PLAY [Group hosts based on configuration] ************************************** 2025-09-06 01:05:39.847557 | orchestrator | 2025-09-06 01:05:39.847568 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 01:05:39.847580 | orchestrator | Saturday 06 September 2025 01:03:50 +0000 (0:00:00.264) 0:00:00.264 **** 2025-09-06 01:05:39.847591 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:05:39.847603 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:05:39.847683 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:05:39.847917 | orchestrator | 2025-09-06 01:05:39.847930 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 01:05:39.847941 | orchestrator | Saturday 06 September 2025 01:03:51 +0000 (0:00:00.315) 0:00:00.579 **** 2025-09-06 01:05:39.847952 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-06 01:05:39.847963 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-06 01:05:39.847974 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-06 01:05:39.847985 | orchestrator | 2025-09-06 01:05:39.848008 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-06 01:05:39.848020 | orchestrator | 2025-09-06 01:05:39.848031 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-06 01:05:39.848041 | orchestrator | Saturday 06 September 2025 01:03:51 +0000 (0:00:00.410) 0:00:00.990 **** 2025-09-06 01:05:39.848052 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:05:39.848064 | orchestrator | 2025-09-06 01:05:39.848075 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-06 01:05:39.848086 | orchestrator | Saturday 06 September 2025 01:03:51 +0000 (0:00:00.527) 0:00:01.517 **** 2025-09-06 01:05:39.848098 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-06 01:05:39.848108 | orchestrator | 2025-09-06 01:05:39.848119 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-06 01:05:39.848130 | orchestrator | Saturday 06 September 2025 01:03:55 +0000 (0:00:03.673) 0:00:05.190 **** 2025-09-06 01:05:39.848161 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-06 01:05:39.848173 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-06 01:05:39.848184 | orchestrator | 2025-09-06 01:05:39.848195 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-06 01:05:39.848206 | orchestrator | Saturday 06 September 2025 01:04:02 +0000 (0:00:06.793) 0:00:11.984 **** 2025-09-06 01:05:39.848217 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-06 01:05:39.848228 | orchestrator | 2025-09-06 01:05:39.848239 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-06 01:05:39.848249 | orchestrator | Saturday 06 September 2025 01:04:06 +0000 (0:00:03.566) 0:00:15.550 **** 2025-09-06 01:05:39.848260 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-06 01:05:39.848271 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-06 
01:05:39.848282 | orchestrator | 2025-09-06 01:05:39.848293 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-06 01:05:39.848303 | orchestrator | Saturday 06 September 2025 01:04:10 +0000 (0:00:04.055) 0:00:19.605 **** 2025-09-06 01:05:39.848314 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-06 01:05:39.848325 | orchestrator | 2025-09-06 01:05:39.848336 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-06 01:05:39.848347 | orchestrator | Saturday 06 September 2025 01:04:13 +0000 (0:00:03.408) 0:00:23.013 **** 2025-09-06 01:05:39.848357 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-06 01:05:39.848368 | orchestrator | 2025-09-06 01:05:39.848379 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-06 01:05:39.848390 | orchestrator | Saturday 06 September 2025 01:04:17 +0000 (0:00:04.391) 0:00:27.405 **** 2025-09-06 01:05:39.848400 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:05:39.848411 | orchestrator | 2025-09-06 01:05:39.848422 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-06 01:05:39.848433 | orchestrator | Saturday 06 September 2025 01:04:21 +0000 (0:00:03.363) 0:00:30.768 **** 2025-09-06 01:05:39.848444 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:05:39.848455 | orchestrator | 2025-09-06 01:05:39.848466 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-06 01:05:39.848477 | orchestrator | Saturday 06 September 2025 01:04:25 +0000 (0:00:04.108) 0:00:34.877 **** 2025-09-06 01:05:39.848488 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:05:39.848499 | orchestrator | 2025-09-06 01:05:39.848510 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-06 01:05:39.848521 | orchestrator | Saturday 06 September 2025 01:04:29 +0000 (0:00:03.866) 0:00:38.743 **** 2025-09-06 01:05:39.848550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.848573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.848595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.848608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.848623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.848644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.848660 | orchestrator | 2025-09-06 01:05:39.848673 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-06 01:05:39.848713 | orchestrator | Saturday 06 September 2025 01:04:30 +0000 (0:00:01.619) 0:00:40.362 **** 2025-09-06 01:05:39.848727 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:05:39.848740 | orchestrator | 2025-09-06 01:05:39.848753 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-06 01:05:39.848766 | orchestrator | Saturday 06 September 2025 01:04:30 +0000 (0:00:00.146) 0:00:40.509 **** 2025-09-06 01:05:39.848778 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:05:39.848792 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:05:39.848804 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:05:39.848816 | orchestrator | 2025-09-06 01:05:39.848829 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-06 01:05:39.848846 | orchestrator | Saturday 06 September 2025 01:04:31 +0000 (0:00:00.493) 0:00:41.002 **** 2025-09-06 01:05:39.848860 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 01:05:39.848873 | orchestrator | 2025-09-06 01:05:39.848884 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-06 01:05:39.848895 | orchestrator | Saturday 06 September 2025 01:04:32 +0000 (0:00:00.842) 0:00:41.845 **** 2025-09-06 01:05:39.848906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.848918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.848929 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.848950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.848978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.848990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.849002 | orchestrator | 2025-09-06 01:05:39.849013 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-06 01:05:39.849024 | orchestrator | Saturday 06 September 2025 01:04:34 +0000 (0:00:02.316) 0:00:44.162 **** 2025-09-06 01:05:39.849035 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:05:39.849046 | orchestrator | ok: 
[testbed-node-1] 2025-09-06 01:05:39.849057 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:05:39.849068 | orchestrator | 2025-09-06 01:05:39.849079 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-06 01:05:39.849089 | orchestrator | Saturday 06 September 2025 01:04:34 +0000 (0:00:00.309) 0:00:44.471 **** 2025-09-06 01:05:39.849100 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:05:39.849111 | orchestrator | 2025-09-06 01:05:39.849122 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-06 01:05:39.849132 | orchestrator | Saturday 06 September 2025 01:04:35 +0000 (0:00:00.715) 0:00:45.187 **** 2025-09-06 01:05:39.849144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.849166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.849183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.849194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.849206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.849217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.849234 | orchestrator | 2025-09-06 01:05:39.849245 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-06 01:05:39.849256 | orchestrator | Saturday 06 September 2025 01:04:38 +0000 (0:00:02.535) 0:00:47.723 **** 2025-09-06 01:05:39.849273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2025-09-06 01:05:39.849290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:05:39.849301 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:05:39.849313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-06 01:05:39.849324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:05:39.849335 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:05:39.849347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2025-09-06 01:05:39.849369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:05:39.849381 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:05:39.849392 | orchestrator | 2025-09-06 01:05:39.849403 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-06 01:05:39.849414 | orchestrator | Saturday 06 September 2025 01:04:38 +0000 (0:00:00.647) 0:00:48.370 **** 2025-09-06 01:05:39.849429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-06 01:05:39.849440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:05:39.849451 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:05:39.849463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-06 01:05:39.849480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:05:39.849491 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:05:39.849510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-06 01:05:39.849526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:05:39.849538 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:05:39.849548 | orchestrator | 2025-09-06 01:05:39.849559 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-06 01:05:39.849570 | orchestrator | Saturday 06 September 2025 01:04:39 +0000 (0:00:01.043) 0:00:49.414 **** 2025-09-06 01:05:39.849581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.849598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.849615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.849631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.849643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.849654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.849672 | orchestrator | 2025-09-06 01:05:39.849683 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-06 01:05:39.849708 | orchestrator | Saturday 06 September 2025 01:04:42 +0000 (0:00:02.521) 0:00:51.936 **** 2025-09-06 01:05:39.849720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.849737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.849753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.849765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.849776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.849793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.849804 | orchestrator | 2025-09-06 01:05:39.849815 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-06 01:05:39.849826 | orchestrator | Saturday 06 September 2025 01:04:48 +0000 (0:00:06.191) 0:00:58.127 **** 2025-09-06 01:05:39.849843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-06 01:05:39.849862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:05:39.849873 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:05:39.849884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-06 01:05:39.849901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:05:39.849912 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:05:39.849924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-06 01:05:39.849940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:05:39.849951 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:05:39.849963 | orchestrator | 2025-09-06 01:05:39.849973 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-06 01:05:39.849984 | orchestrator | Saturday 06 September 2025 01:04:49 +0000 (0:00:00.607) 0:00:58.735 **** 2025-09-06 01:05:39.850000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.850012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.850082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-06 01:05:39.850094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.850113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.850130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:05:39.850142 | orchestrator | 2025-09-06 01:05:39.850153 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2025-09-06 01:05:39.850172 | orchestrator | Saturday 06 September 2025 01:04:51 +0000 (0:00:02.291) 0:01:01.027 **** 2025-09-06 01:05:39.850183 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:05:39.850194 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:05:39.850205 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:05:39.850216 | orchestrator | 2025-09-06 01:05:39.850227 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-06 01:05:39.850237 | orchestrator | Saturday 06 September 2025 01:04:51 +0000 (0:00:00.249) 0:01:01.276 **** 2025-09-06 01:05:39.850248 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:05:39.850259 | orchestrator | 2025-09-06 01:05:39.850270 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-06 01:05:39.850281 | orchestrator | Saturday 06 September 2025 01:04:54 +0000 (0:00:02.335) 0:01:03.612 **** 2025-09-06 01:05:39.850291 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:05:39.850302 | orchestrator | 2025-09-06 01:05:39.850313 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-06 01:05:39.850324 | orchestrator | Saturday 06 September 2025 01:04:56 +0000 (0:00:02.322) 0:01:05.935 **** 2025-09-06 01:05:39.850334 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:05:39.850345 | orchestrator | 2025-09-06 01:05:39.850356 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-06 01:05:39.850367 | orchestrator | Saturday 06 September 2025 01:05:13 +0000 (0:00:16.629) 0:01:22.564 **** 2025-09-06 01:05:39.850377 | orchestrator | 2025-09-06 01:05:39.850388 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-06 01:05:39.850399 | orchestrator | Saturday 06 September 2025 01:05:13 +0000 (0:00:00.157) 0:01:22.722 **** 2025-09-06 01:05:39.850409 | orchestrator | 2025-09-06 01:05:39.850420 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-06 01:05:39.850431 | orchestrator | Saturday 06 September 2025 01:05:13 +0000 (0:00:00.106) 0:01:22.828 **** 2025-09-06 01:05:39.850441 | orchestrator | 2025-09-06 01:05:39.850452 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-06 01:05:39.850463 | orchestrator | Saturday 06 September 2025 01:05:13 +0000 (0:00:00.063) 0:01:22.892 **** 2025-09-06 01:05:39.850473 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:05:39.850484 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:05:39.850495 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:05:39.850506 | orchestrator | 2025-09-06 01:05:39.850516 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-06 01:05:39.850527 | orchestrator | Saturday 06 September 2025 01:05:28 +0000 (0:00:14.875) 0:01:37.767 **** 2025-09-06 01:05:39.850538 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:05:39.850548 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:05:39.850559 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:05:39.850570 | orchestrator | 2025-09-06 01:05:39.850580 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 01:05:39.850592 | orchestrator | testbed-node-0 : ok=26  changed=18  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-06 01:05:39.850603 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-06 01:05:39.850613 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-06 01:05:39.850624 | orchestrator | 2025-09-06 01:05:39.850635 | orchestrator | 2025-09-06 01:05:39.850645 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 01:05:39.850656 | orchestrator | Saturday 06 September 2025 01:05:39 +0000 (0:00:11.206) 0:01:48.974 **** 2025-09-06 01:05:39.850667 | orchestrator | =============================================================================== 2025-09-06 01:05:39.850684 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.63s 2025-09-06 01:05:39.850792 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.88s 2025-09-06 01:05:39.850805 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.21s 2025-09-06 01:05:39.850815 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.79s 2025-09-06 01:05:39.850826 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.19s 2025-09-06 01:05:39.850837 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.39s 2025-09-06 01:05:39.850847 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.11s 2025-09-06 01:05:39.850858 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.06s 2025-09-06 01:05:39.850869 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.87s 2025-09-06 01:05:39.850879 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.67s 2025-09-06 01:05:39.850896 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.57s 2025-09-06 01:05:39.850907 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.41s 2025-09-06 01:05:39.850918 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.36s 2025-09-06 01:05:39.850928 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.54s 2025-09-06 01:05:39.850939 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.52s 2025-09-06 01:05:39.850949 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.34s 2025-09-06 01:05:39.850960 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.32s 2025-09-06 01:05:39.850970 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.32s 2025-09-06 01:05:39.850981 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.29s 2025-09-06 01:05:39.850992 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.62s 2025-09-06 01:05:39.851002 | orchestrator | 2025-09-06 01:05:39 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:42.871588 | orchestrator | 2025-09-06 01:05:42 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:42.872027 | orchestrator | 
2025-09-06 01:05:42 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:42.872654 | orchestrator | 2025-09-06 01:05:42 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:42.873251 | orchestrator | 2025-09-06 01:05:42 | INFO  | Task 134e20f4-61db-478a-8d66-645ef8f1b12d is in state STARTED 2025-09-06 01:05:42.873274 | orchestrator | 2025-09-06 01:05:42 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:45.895061 | orchestrator | 2025-09-06 01:05:45 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:45.895522 | orchestrator | 2025-09-06 01:05:45 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:45.896184 | orchestrator | 2025-09-06 01:05:45 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:45.897060 | orchestrator | 2025-09-06 01:05:45 | INFO  | Task 134e20f4-61db-478a-8d66-645ef8f1b12d is in state STARTED 2025-09-06 01:05:45.897083 | orchestrator | 2025-09-06 01:05:45 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:48.933276 | orchestrator | 2025-09-06 01:05:48 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:48.934186 | orchestrator | 2025-09-06 01:05:48 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:48.935974 | orchestrator | 2025-09-06 01:05:48 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:48.937441 | orchestrator | 2025-09-06 01:05:48 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:05:48.938997 | orchestrator | 2025-09-06 01:05:48 | INFO  | Task 134e20f4-61db-478a-8d66-645ef8f1b12d is in state SUCCESS 2025-09-06 01:05:48.939065 | orchestrator | 2025-09-06 01:05:48 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:51.974777 | orchestrator | 2025-09-06 01:05:51 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:51.976639 | orchestrator | 2025-09-06 01:05:51 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:51.979582 | orchestrator | 2025-09-06 01:05:51 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:51.981416 | orchestrator | 2025-09-06 01:05:51 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:05:51.981642 | orchestrator | 2025-09-06 01:05:51 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:55.033106 | orchestrator | 2025-09-06 01:05:55 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:55.034266 | orchestrator | 2025-09-06 01:05:55 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:55.038855 | orchestrator | 2025-09-06 01:05:55 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:55.039592 | orchestrator | 2025-09-06 01:05:55 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:05:55.039617 | orchestrator | 2025-09-06 01:05:55 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:05:58.080210 | orchestrator | 2025-09-06 01:05:58 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:05:58.081569 | orchestrator | 2025-09-06 01:05:58 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:05:58.083124 | orchestrator | 
2025-09-06 01:05:58 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:05:58.084569 | orchestrator | 2025-09-06 01:05:58 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:05:58.084816 | orchestrator | 2025-09-06 01:05:58 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:01.134499 | orchestrator | 2025-09-06 01:06:01 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:01.136044 | orchestrator | 2025-09-06 01:06:01 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:06:01.137758 | orchestrator | 2025-09-06 01:06:01 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:01.139305 | orchestrator | 2025-09-06 01:06:01 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:01.139457 | orchestrator | 2025-09-06 01:06:01 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:04.192735 | orchestrator | 2025-09-06 01:06:04 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:04.197750 | orchestrator | 2025-09-06 01:06:04 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:06:04.201849 | orchestrator | 2025-09-06 01:06:04 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:04.203370 | orchestrator | 2025-09-06 01:06:04 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:04.203796 | orchestrator | 2025-09-06 01:06:04 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:07.252788 | orchestrator | 2025-09-06 01:06:07 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:07.254344 | orchestrator | 2025-09-06 01:06:07 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:06:07.256698 | orchestrator | 2025-09-06 01:06:07 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:07.258955 | orchestrator | 2025-09-06 01:06:07 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:07.259069 | orchestrator | 2025-09-06 01:06:07 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:10.302493 | orchestrator | 2025-09-06 01:06:10 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:10.304098 | orchestrator | 2025-09-06 01:06:10 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:06:10.305830 | orchestrator | 2025-09-06 01:06:10 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:10.306991 | orchestrator | 2025-09-06 01:06:10 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:10.307012 | orchestrator | 2025-09-06 01:06:10 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:13.360366 | orchestrator | 2025-09-06 01:06:13 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:13.361396 | orchestrator | 2025-09-06 01:06:13 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:06:13.363087 | orchestrator | 2025-09-06 01:06:13 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:13.365921 | orchestrator | 2025-09-06 01:06:13 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:13.365951 | orchestrator | 
2025-09-06 01:06:13 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:16.415274 | orchestrator | 2025-09-06 01:06:16 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:16.416272 | orchestrator | 2025-09-06 01:06:16 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:06:16.417911 | orchestrator | 2025-09-06 01:06:16 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:16.419046 | orchestrator | 2025-09-06 01:06:16 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:16.419423 | orchestrator | 2025-09-06 01:06:16 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:19.465253 | orchestrator | 2025-09-06 01:06:19 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:19.469232 | orchestrator | 2025-09-06 01:06:19 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state STARTED 2025-09-06 01:06:19.472097 | orchestrator | 2025-09-06 01:06:19 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:19.472720 | orchestrator | 2025-09-06 01:06:19 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:19.473044 | orchestrator | 2025-09-06 01:06:19 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:22.523865 | orchestrator | 2025-09-06 01:06:22 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:22.527908 | orchestrator | 2025-09-06 01:06:22 | INFO  | Task dfd2161b-5599-483f-a5dd-494a5f4a3848 is in state SUCCESS 2025-09-06 01:06:22.530374 | orchestrator | 2025-09-06 01:06:22.530415 | orchestrator | 2025-09-06 01:06:22.530428 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 01:06:22.530469 | orchestrator | 2025-09-06 01:06:22.530481 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 01:06:22.530493 | orchestrator | Saturday 06 September 2025 01:05:44 +0000 (0:00:00.263) 0:00:00.263 **** 2025-09-06 01:06:22.530504 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:06:22.530516 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:06:22.530527 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:06:22.530537 | orchestrator | 2025-09-06 01:06:22.530548 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 01:06:22.530559 | orchestrator | Saturday 06 September 2025 01:05:44 +0000 (0:00:00.500) 0:00:00.764 **** 2025-09-06 01:06:22.530570 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-06 01:06:22.530581 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-06 01:06:22.530592 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-06 01:06:22.530632 | orchestrator | 2025-09-06 01:06:22.530644 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-06 01:06:22.530655 | orchestrator | 2025-09-06 01:06:22.530665 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-06 01:06:22.530676 | orchestrator | Saturday 06 September 2025 01:05:45 +0000 (0:00:00.870) 0:00:01.634 **** 2025-09-06 01:06:22.530687 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:06:22.530698 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:06:22.530708 | orchestrator | ok: 
[testbed-node-0] 2025-09-06 01:06:22.530719 | orchestrator | 2025-09-06 01:06:22.530730 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 01:06:22.530742 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 01:06:22.530755 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 01:06:22.530766 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-06 01:06:22.531209 | orchestrator | 2025-09-06 01:06:22.531229 | orchestrator | 2025-09-06 01:06:22.531243 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 01:06:22.531256 | orchestrator | Saturday 06 September 2025 01:05:46 +0000 (0:00:00.668) 0:00:02.302 **** 2025-09-06 01:06:22.531271 | orchestrator | =============================================================================== 2025-09-06 01:06:22.531284 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2025-09-06 01:06:22.531295 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.67s 2025-09-06 01:06:22.531306 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s 2025-09-06 01:06:22.531316 | orchestrator | 2025-09-06 01:06:22.531327 | orchestrator | 2025-09-06 01:06:22.531338 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 01:06:22.531349 | orchestrator | 2025-09-06 01:06:22.531360 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-06 01:06:22.531371 | orchestrator | Saturday 06 September 2025 00:57:35 +0000 (0:00:00.223) 0:00:00.223 **** 2025-09-06 01:06:22.531381 | orchestrator | changed: [testbed-manager] 2025-09-06 01:06:22.531393 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.531403 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:06:22.531414 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:06:22.531466 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.531480 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.531491 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.531501 | orchestrator | 2025-09-06 01:06:22.531512 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 01:06:22.531523 | orchestrator | Saturday 06 September 2025 00:57:36 +0000 (0:00:00.669) 0:00:00.892 **** 2025-09-06 01:06:22.531595 | orchestrator | changed: [testbed-manager] 2025-09-06 01:06:22.531642 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.531654 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:06:22.531665 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:06:22.531676 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.531686 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.531697 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.531708 | orchestrator | 2025-09-06 01:06:22.531719 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 01:06:22.531729 | orchestrator | Saturday 06 September 2025 00:57:37 +0000 (0:00:00.582) 0:00:01.475 **** 2025-09-06 01:06:22.531740 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 
2025-09-06 01:06:22.531751 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-06 01:06:22.531762 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-06 01:06:22.531773 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-06 01:06:22.531783 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-06 01:06:22.531794 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-06 01:06:22.531805 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-06 01:06:22.532140 | orchestrator | 2025-09-06 01:06:22.532166 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-06 01:06:22.532178 | orchestrator | 2025-09-06 01:06:22.532189 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-06 01:06:22.532200 | orchestrator | Saturday 06 September 2025 00:57:37 +0000 (0:00:00.760) 0:00:02.236 **** 2025-09-06 01:06:22.532211 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:06:22.532222 | orchestrator | 2025-09-06 01:06:22.532232 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-06 01:06:22.532243 | orchestrator | Saturday 06 September 2025 00:57:38 +0000 (0:00:00.603) 0:00:02.840 **** 2025-09-06 01:06:22.532254 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-06 01:06:22.532297 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-06 01:06:22.532310 | orchestrator | 2025-09-06 01:06:22.532321 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-06 01:06:22.532332 | orchestrator | Saturday 06 September 2025 00:57:42 +0000 (0:00:03.598) 0:00:06.438 **** 2025-09-06 01:06:22.532343 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-06 01:06:22.532354 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-06 01:06:22.532365 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.532376 | orchestrator | 2025-09-06 01:06:22.532386 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-06 01:06:22.532397 | orchestrator | Saturday 06 September 2025 00:57:45 +0000 (0:00:03.690) 0:00:10.129 **** 2025-09-06 01:06:22.532408 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.532419 | orchestrator | 2025-09-06 01:06:22.532430 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-06 01:06:22.532440 | orchestrator | Saturday 06 September 2025 00:57:46 +0000 (0:00:00.634) 0:00:10.763 **** 2025-09-06 01:06:22.532451 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.532462 | orchestrator | 2025-09-06 01:06:22.532473 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-06 01:06:22.532554 | orchestrator | Saturday 06 September 2025 00:57:47 +0000 (0:00:01.348) 0:00:12.112 **** 2025-09-06 01:06:22.532570 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.532581 | orchestrator | 2025-09-06 01:06:22.532592 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-06 01:06:22.532636 | orchestrator | Saturday 06 September 2025 00:57:51 +0000 (0:00:03.276) 0:00:15.389 **** 2025-09-06 01:06:22.532648 | orchestrator | skipping: 
[testbed-node-0] 2025-09-06 01:06:22.532659 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.532670 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.532691 | orchestrator | 2025-09-06 01:06:22.532702 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-06 01:06:22.532713 | orchestrator | Saturday 06 September 2025 00:57:51 +0000 (0:00:00.390) 0:00:15.780 **** 2025-09-06 01:06:22.532724 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:06:22.532734 | orchestrator | 2025-09-06 01:06:22.532745 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-06 01:06:22.532756 | orchestrator | Saturday 06 September 2025 00:58:18 +0000 (0:00:26.749) 0:00:42.529 **** 2025-09-06 01:06:22.532769 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.532782 | orchestrator | 2025-09-06 01:06:22.532795 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-06 01:06:22.532809 | orchestrator | Saturday 06 September 2025 00:58:32 +0000 (0:00:14.357) 0:00:56.886 **** 2025-09-06 01:06:22.532821 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:06:22.532834 | orchestrator | 2025-09-06 01:06:22.532847 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-06 01:06:22.532860 | orchestrator | Saturday 06 September 2025 00:58:43 +0000 (0:00:10.878) 0:01:07.765 **** 2025-09-06 01:06:22.532872 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:06:22.532885 | orchestrator | 2025-09-06 01:06:22.532897 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-06 01:06:22.532910 | orchestrator | Saturday 06 September 2025 00:58:44 +0000 (0:00:01.133) 0:01:08.898 **** 2025-09-06 01:06:22.532922 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.532935 | orchestrator | 2025-09-06 01:06:22.532948 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-06 01:06:22.532960 | orchestrator | Saturday 06 September 2025 00:58:45 +0000 (0:00:00.516) 0:01:09.415 **** 2025-09-06 01:06:22.532973 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:06:22.532986 | orchestrator | 2025-09-06 01:06:22.532999 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-06 01:06:22.533012 | orchestrator | Saturday 06 September 2025 00:58:45 +0000 (0:00:00.475) 0:01:09.890 **** 2025-09-06 01:06:22.533025 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:06:22.533479 | orchestrator | 2025-09-06 01:06:22.533496 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-06 01:06:22.533507 | orchestrator | Saturday 06 September 2025 00:59:00 +0000 (0:00:15.346) 0:01:25.237 **** 2025-09-06 01:06:22.533518 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.533529 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.533540 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.533550 | orchestrator | 2025-09-06 01:06:22.533561 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-06 01:06:22.533572 | orchestrator | 2025-09-06 01:06:22.533583 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 
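The "Bootstrap deploy" task that follows pulls in the nova-cell role: it creates the Nova cell database and database user, ensures the RabbitMQ vhost and user for the cell exist, runs the nova-cell-bootstrap container, and finally registers the cell. The cell registration steps seen further down ("Get a list of existing cells", "Create cell") wrap the standard nova-manage cell_v2 workflow; a rough manual equivalent is sketched here (connection URLs and credentials are illustrative placeholders, not values taken from this deployment):

    # Inspect which cells are already registered
    nova-manage cell_v2 list_cells --verbose

    # Map cell0 to its dedicated database (done earlier by the nova role, "Create cell0 mappings")
    nova-manage cell_v2 map_cell0 --database_connection mysql+pymysql://nova:SECRET@DB_HOST/nova_cell0

    # Register the main cell with its database and message-queue transport URL
    nova-manage cell_v2 create_cell --name cell1 \
        --database_connection mysql+pymysql://nova:SECRET@DB_HOST/nova \
        --transport-url rabbit://openstack:SECRET@RABBIT_HOST:5672/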
2025-09-06 01:06:22.533593 | orchestrator | Saturday 06 September 2025 00:59:01 +0000 (0:00:00.319) 0:01:25.557 **** 2025-09-06 01:06:22.533746 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:06:22.533950 | orchestrator | 2025-09-06 01:06:22.533967 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-06 01:06:22.533978 | orchestrator | Saturday 06 September 2025 00:59:01 +0000 (0:00:00.603) 0:01:26.161 **** 2025-09-06 01:06:22.533989 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534000 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534011 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.534055 | orchestrator | 2025-09-06 01:06:22.534076 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-06 01:06:22.534087 | orchestrator | Saturday 06 September 2025 00:59:03 +0000 (0:00:01.840) 0:01:28.002 **** 2025-09-06 01:06:22.534098 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534109 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534120 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.534145 | orchestrator | 2025-09-06 01:06:22.534157 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-06 01:06:22.534168 | orchestrator | Saturday 06 September 2025 00:59:05 +0000 (0:00:01.875) 0:01:29.877 **** 2025-09-06 01:06:22.534178 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.534189 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534281 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534297 | orchestrator | 2025-09-06 01:06:22.534308 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-06 01:06:22.534319 | orchestrator | Saturday 06 September 2025 00:59:05 +0000 (0:00:00.342) 0:01:30.219 **** 2025-09-06 01:06:22.534330 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-06 01:06:22.534341 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534352 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-06 01:06:22.534362 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534373 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-06 01:06:22.534384 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-06 01:06:22.534395 | orchestrator | 2025-09-06 01:06:22.534405 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-06 01:06:22.534416 | orchestrator | Saturday 06 September 2025 00:59:14 +0000 (0:00:08.582) 0:01:38.802 **** 2025-09-06 01:06:22.534427 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.534437 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534448 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534458 | orchestrator | 2025-09-06 01:06:22.534469 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-06 01:06:22.534479 | orchestrator | Saturday 06 September 2025 00:59:14 +0000 (0:00:00.301) 0:01:39.103 **** 2025-09-06 01:06:22.534490 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-06 01:06:22.534501 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.534511 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-06 
01:06:22.534522 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534532 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-06 01:06:22.534543 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534553 | orchestrator | 2025-09-06 01:06:22.534564 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-06 01:06:22.534575 | orchestrator | Saturday 06 September 2025 00:59:15 +0000 (0:00:00.677) 0:01:39.781 **** 2025-09-06 01:06:22.534586 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534596 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534630 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.534641 | orchestrator | 2025-09-06 01:06:22.534651 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-06 01:06:22.534684 | orchestrator | Saturday 06 September 2025 00:59:16 +0000 (0:00:00.503) 0:01:40.285 **** 2025-09-06 01:06:22.534695 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534705 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534716 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.534727 | orchestrator | 2025-09-06 01:06:22.534738 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-06 01:06:22.534749 | orchestrator | Saturday 06 September 2025 00:59:17 +0000 (0:00:01.072) 0:01:41.357 **** 2025-09-06 01:06:22.534760 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534770 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534781 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.534792 | orchestrator | 2025-09-06 01:06:22.534803 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-06 01:06:22.534814 | orchestrator | Saturday 06 September 2025 00:59:19 +0000 (0:00:02.609) 0:01:43.967 **** 2025-09-06 01:06:22.534825 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534835 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534855 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:06:22.534866 | orchestrator | 2025-09-06 01:06:22.534877 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-06 01:06:22.534888 | orchestrator | Saturday 06 September 2025 00:59:40 +0000 (0:00:20.621) 0:02:04.589 **** 2025-09-06 01:06:22.534898 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534909 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534920 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:06:22.534930 | orchestrator | 2025-09-06 01:06:22.534941 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-06 01:06:22.534952 | orchestrator | Saturday 06 September 2025 00:59:52 +0000 (0:00:11.867) 0:02:16.456 **** 2025-09-06 01:06:22.534963 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:06:22.534973 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.534984 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.534995 | orchestrator | 2025-09-06 01:06:22.535006 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-06 01:06:22.535016 | orchestrator | Saturday 06 September 2025 00:59:53 +0000 (0:00:01.150) 0:02:17.606 **** 2025-09-06 01:06:22.535027 | orchestrator | skipping: [testbed-node-1] 
2025-09-06 01:06:22.535038 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.535048 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.535059 | orchestrator | 2025-09-06 01:06:22.535070 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-06 01:06:22.535081 | orchestrator | Saturday 06 September 2025 01:00:05 +0000 (0:00:11.639) 0:02:29.246 **** 2025-09-06 01:06:22.535091 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.535102 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.535113 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.535123 | orchestrator | 2025-09-06 01:06:22.535134 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-06 01:06:22.535145 | orchestrator | Saturday 06 September 2025 01:00:05 +0000 (0:00:00.967) 0:02:30.214 **** 2025-09-06 01:06:22.535162 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.535173 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.535184 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.535195 | orchestrator | 2025-09-06 01:06:22.535206 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-06 01:06:22.535216 | orchestrator | 2025-09-06 01:06:22.535227 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-06 01:06:22.535238 | orchestrator | Saturday 06 September 2025 01:00:06 +0000 (0:00:00.448) 0:02:30.662 **** 2025-09-06 01:06:22.535249 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:06:22.535260 | orchestrator | 2025-09-06 01:06:22.535350 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-06 01:06:22.535366 | orchestrator | Saturday 06 September 2025 01:00:06 +0000 (0:00:00.497) 0:02:31.160 **** 2025-09-06 01:06:22.535377 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-06 01:06:22.535388 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-06 01:06:22.535399 | orchestrator | 2025-09-06 01:06:22.535410 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-06 01:06:22.535421 | orchestrator | Saturday 06 September 2025 01:00:10 +0000 (0:00:03.145) 0:02:34.306 **** 2025-09-06 01:06:22.535432 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-06 01:06:22.535444 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-06 01:06:22.535455 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-06 01:06:22.535467 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-06 01:06:22.535486 | orchestrator | 2025-09-06 01:06:22.535497 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-06 01:06:22.535507 | orchestrator | Saturday 06 September 2025 01:00:16 +0000 (0:00:06.473) 0:02:40.779 **** 2025-09-06 01:06:22.535518 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-06 01:06:22.535529 | orchestrator | 2025-09-06 
01:06:22.535540 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-06 01:06:22.535551 | orchestrator | Saturday 06 September 2025 01:00:19 +0000 (0:00:03.214) 0:02:43.994 **** 2025-09-06 01:06:22.535561 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-06 01:06:22.535572 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-06 01:06:22.535583 | orchestrator | 2025-09-06 01:06:22.535594 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-06 01:06:22.535660 | orchestrator | Saturday 06 September 2025 01:00:23 +0000 (0:00:04.011) 0:02:48.005 **** 2025-09-06 01:06:22.535672 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-06 01:06:22.535683 | orchestrator | 2025-09-06 01:06:22.535694 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-06 01:06:22.535704 | orchestrator | Saturday 06 September 2025 01:00:27 +0000 (0:00:03.688) 0:02:51.694 **** 2025-09-06 01:06:22.535715 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-06 01:06:22.535726 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-06 01:06:22.535736 | orchestrator | 2025-09-06 01:06:22.535747 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-06 01:06:22.535758 | orchestrator | Saturday 06 September 2025 01:00:34 +0000 (0:00:07.308) 0:02:59.002 **** 2025-09-06 01:06:22.535775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.535879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.535907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.535920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.535933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.535946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.535957 | orchestrator | 2025-09-06 01:06:22.535968 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-06 01:06:22.535979 | orchestrator | Saturday 06 September 2025 01:00:36 +0000 (0:00:01.710) 0:03:00.713 **** 2025-09-06 01:06:22.536003 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.536014 | orchestrator | 2025-09-06 01:06:22.536025 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-06 01:06:22.536036 | orchestrator | Saturday 06 September 2025 01:00:36 +0000 (0:00:00.111) 0:03:00.825 **** 2025-09-06 01:06:22.536047 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.536057 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.536068 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.536085 | orchestrator | 2025-09-06 01:06:22.536097 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-06 01:06:22.536107 | orchestrator | Saturday 06 September 2025 01:00:36 +0000 (0:00:00.416) 0:03:01.241 **** 2025-09-06 01:06:22.536149 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 01:06:22.536162 | orchestrator | 2025-09-06 01:06:22.536173 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-06 01:06:22.536183 | orchestrator | Saturday 06 September 2025 01:00:38 +0000 (0:00:01.315) 0:03:02.556 **** 2025-09-06 01:06:22.536193 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.536202 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.536212 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.536221 | orchestrator | 2025-09-06 01:06:22.536231 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-06 01:06:22.536240 | orchestrator | Saturday 06 September 2025 01:00:38 +0000 (0:00:00.556) 0:03:03.113 **** 2025-09-06 01:06:22.536250 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:06:22.536277 | orchestrator | 2025-09-06 01:06:22.536288 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-06 01:06:22.536298 | orchestrator | Saturday 06 September 2025 01:00:39 +0000 (0:00:00.943) 0:03:04.056 **** 2025-09-06 01:06:22.536309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.536320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.536368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.536388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.536399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.536409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.536419 | orchestrator | 2025-09-06 01:06:22.536429 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-06 01:06:22.536439 | orchestrator | Saturday 06 September 2025 01:00:42 +0000 (0:00:03.169) 0:03:07.225 **** 2025-09-06 01:06:22.536449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 01:06:22.536470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.536481 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.536522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 01:06:22.536537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.536550 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.536563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 01:06:22.536585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.536597 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.536630 | orchestrator | 2025-09-06 01:06:22.536641 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-06 01:06:22.536653 | orchestrator | Saturday 06 September 2025 01:00:43 +0000 (0:00:00.864) 0:03:08.090 **** 2025-09-06 01:06:22.536694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 01:06:22.536708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.536721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 
01:06:22.536740 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.536752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.536765 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.536808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 01:06:22.536823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.536835 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.536846 | orchestrator | 2025-09-06 01:06:22.536856 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-06 01:06:22.536865 | orchestrator | Saturday 06 September 2025 01:00:45 +0000 (0:00:01.414) 0:03:09.505 **** 2025-09-06 01:06:22.536876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.536897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.536933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.536945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.536956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.536966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.536982 | orchestrator | 2025-09-06 01:06:22.536992 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-06 01:06:22.537002 | orchestrator | Saturday 06 September 2025 01:00:47 +0000 (0:00:02.667) 0:03:12.172 **** 2025-09-06 01:06:22.537041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.537055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.537066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.537083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.537102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.537139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.537150 | orchestrator | 2025-09-06 01:06:22.537160 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-06 01:06:22.537170 | orchestrator | Saturday 06 September 2025 01:00:58 +0000 (0:00:10.111) 0:03:22.284 **** 2025-09-06 01:06:22.537180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 01:06:22.537191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.537207 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.537217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 01:06:22.537232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.537242 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.537280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-06 01:06:22.537293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.537308 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.537318 | orchestrator | 2025-09-06 01:06:22.537328 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-06 01:06:22.537338 | orchestrator | Saturday 06 September 2025 01:00:59 +0000 (0:00:01.459) 0:03:23.743 **** 2025-09-06 01:06:22.537347 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.537357 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:06:22.537367 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:06:22.537376 | orchestrator | 
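
For reference, the per-service definitions echoed in the items above all follow one shape: a container image, bind-mount volumes, an optional healthcheck (a CMD-SHELL test such as healthcheck_curl or healthcheck_port plus interval/retries/timeout), and, for API services, the haproxy listeners to expose. The following is a minimal Python sketch, assuming only the dict shape visible in this log; the summarize() helper is hypothetical and is not a kolla-ansible function.

    # Illustrative only: summarize one of the kolla service definitions printed above.
    nova_api = {
        "container_name": "nova_api",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774 "],
            "timeout": "30",
        },
        "haproxy": {
            "nova_api": {"enabled": True, "external": False, "port": "8774"},
            "nova_api_external": {"enabled": True, "external": True, "port": "8774"},
            "nova_metadata": {"enabled": True, "external": False, "port": "8775"},
            "nova_metadata_external": {"enabled": "no", "external": True, "port": "8775"},
        },
    }

    def summarize(service: dict) -> None:
        hc = service.get("healthcheck", {})
        print(f"{service['container_name']}: healthcheck={' '.join(hc.get('test', []))!r} "
              f"every {hc.get('interval')}s")
        for name, listener in service.get("haproxy", {}).items():
            # kolla mixes booleans and the string 'no' here, exactly as in the log output
            if listener.get("enabled") is True:
                scope = "external" if listener.get("external") else "internal"
                print(f"  haproxy listener {name}: {scope} port {listener['port']}")

    summarize(nova_api)
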
2025-09-06 01:06:22.537386 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-06 01:06:22.537395 | orchestrator | Saturday 06 September 2025 01:01:01 +0000 (0:00:01.672) 0:03:25.416 **** 2025-09-06 01:06:22.537405 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.537414 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.537424 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.537434 | orchestrator | 2025-09-06 01:06:22.537443 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-06 01:06:22.537453 | orchestrator | Saturday 06 September 2025 01:01:01 +0000 (0:00:00.696) 0:03:26.112 **** 2025-09-06 01:06:22.537467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.537505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.537517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.537534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-06 01:06:22.537545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.537560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.537570 | orchestrator | 2025-09-06 01:06:22.537580 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-06 01:06:22.537590 | orchestrator | Saturday 06 September 2025 01:01:04 +0000 (0:00:02.812) 0:03:28.925 **** 2025-09-06 01:06:22.537614 | orchestrator | 2025-09-06 01:06:22.537625 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-06 01:06:22.537661 | orchestrator | Saturday 06 September 2025 01:01:04 +0000 (0:00:00.197) 0:03:29.122 **** 2025-09-06 01:06:22.537673 | orchestrator | 2025-09-06 01:06:22.537682 | orchestrator | TASK [nova : Flush handlers] 
*************************************************** 2025-09-06 01:06:22.537692 | orchestrator | Saturday 06 September 2025 01:01:05 +0000 (0:00:00.361) 0:03:29.483 **** 2025-09-06 01:06:22.537701 | orchestrator | 2025-09-06 01:06:22.537711 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-06 01:06:22.537721 | orchestrator | Saturday 06 September 2025 01:01:05 +0000 (0:00:00.283) 0:03:29.767 **** 2025-09-06 01:06:22.537730 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.537740 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:06:22.537749 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:06:22.537759 | orchestrator | 2025-09-06 01:06:22.537768 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-06 01:06:22.537778 | orchestrator | Saturday 06 September 2025 01:01:28 +0000 (0:00:22.941) 0:03:52.708 **** 2025-09-06 01:06:22.537793 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.537803 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:06:22.537812 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:06:22.537822 | orchestrator | 2025-09-06 01:06:22.537831 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-06 01:06:22.537840 | orchestrator | 2025-09-06 01:06:22.537850 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-06 01:06:22.537859 | orchestrator | Saturday 06 September 2025 01:01:40 +0000 (0:00:12.155) 0:04:04.864 **** 2025-09-06 01:06:22.537869 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:06:22.537880 | orchestrator | 2025-09-06 01:06:22.537889 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-06 01:06:22.537899 | orchestrator | Saturday 06 September 2025 01:01:42 +0000 (0:00:01.528) 0:04:06.392 **** 2025-09-06 01:06:22.537909 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.537918 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.537928 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.537937 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.537946 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.537956 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.537965 | orchestrator | 2025-09-06 01:06:22.537975 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-06 01:06:22.537984 | orchestrator | Saturday 06 September 2025 01:01:43 +0000 (0:00:00.927) 0:04:07.320 **** 2025-09-06 01:06:22.537994 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.538003 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.538012 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.538049 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 01:06:22.538059 | orchestrator | 2025-09-06 01:06:22.538069 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-06 01:06:22.538079 | orchestrator | Saturday 06 September 2025 01:01:44 +0000 (0:00:01.064) 0:04:08.384 **** 2025-09-06 01:06:22.538088 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-06 01:06:22.538098 | orchestrator 
| ok: [testbed-node-4] => (item=br_netfilter) 2025-09-06 01:06:22.538107 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-06 01:06:22.538117 | orchestrator | 2025-09-06 01:06:22.538126 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-06 01:06:22.538135 | orchestrator | Saturday 06 September 2025 01:01:45 +0000 (0:00:00.981) 0:04:09.366 **** 2025-09-06 01:06:22.538145 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-06 01:06:22.538155 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-06 01:06:22.538164 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-06 01:06:22.538174 | orchestrator | 2025-09-06 01:06:22.538183 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-06 01:06:22.538193 | orchestrator | Saturday 06 September 2025 01:01:46 +0000 (0:00:01.477) 0:04:10.843 **** 2025-09-06 01:06:22.538202 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-06 01:06:22.538212 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.538221 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-06 01:06:22.538231 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.538240 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-06 01:06:22.538250 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.538259 | orchestrator | 2025-09-06 01:06:22.538268 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-06 01:06:22.538278 | orchestrator | Saturday 06 September 2025 01:01:48 +0000 (0:00:01.991) 0:04:12.835 **** 2025-09-06 01:06:22.538287 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-06 01:06:22.538303 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-06 01:06:22.538312 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.538322 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-06 01:06:22.538331 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-06 01:06:22.538341 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.538354 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-06 01:06:22.538364 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-06 01:06:22.538373 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.538383 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-06 01:06:22.538393 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-06 01:06:22.538402 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-06 01:06:22.538440 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-06 01:06:22.538451 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-06 01:06:22.538460 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-06 01:06:22.538470 | orchestrator | 2025-09-06 01:06:22.538480 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 
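
The two module-load tasks above load br_netfilter immediately and persist it via modules-load.d, and the sysctl task then enables the two bridge-nf-call keys so bridged traffic on the compute nodes passes through iptables. Below is a minimal Python sketch of the equivalent manual steps, assuming the conventional /etc/modules-load.d path; the file name and helper names are illustrative and not taken from the role.

    # Illustrative only: load a kernel module now, persist it, and enable sysctls.
    from pathlib import Path
    import subprocess

    MODULE = "br_netfilter"
    SYSCTLS = ["net.bridge.bridge-nf-call-iptables", "net.bridge.bridge-nf-call-ip6tables"]

    def load_and_persist_module(module: str) -> None:
        subprocess.run(["modprobe", module], check=True)                      # load immediately
        Path(f"/etc/modules-load.d/{module}.conf").write_text(module + "\n")  # persist across reboots

    def enable_sysctls(keys: list[str]) -> None:
        for key in keys:
            subprocess.run(["sysctl", "-w", f"{key}=1"], check=True)

    if __name__ == "__main__":
        load_and_persist_module(MODULE)
        enable_sysctls(SYSCTLS)
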
2025-09-06 01:06:22.538489 | orchestrator | Saturday 06 September 2025 01:01:51 +0000 (0:00:02.657) 0:04:15.493 **** 2025-09-06 01:06:22.538499 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.538508 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.538518 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.538527 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.538537 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.538546 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.538556 | orchestrator | 2025-09-06 01:06:22.538565 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-06 01:06:22.538575 | orchestrator | Saturday 06 September 2025 01:01:53 +0000 (0:00:01.998) 0:04:17.491 **** 2025-09-06 01:06:22.538585 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.538594 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.538619 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.538629 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.538639 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.538648 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.538658 | orchestrator | 2025-09-06 01:06:22.538667 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-06 01:06:22.538677 | orchestrator | Saturday 06 September 2025 01:01:55 +0000 (0:00:02.334) 0:04:19.826 **** 2025-09-06 01:06:22.538687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538894 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}}) 2025-09-06 01:06:22.538904 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.538930 | orchestrator | 2025-09-06 01:06:22.538939 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-06 01:06:22.538949 | orchestrator | Saturday 06 September 2025 01:01:58 +0000 (0:00:03.011) 0:04:22.837 **** 2025-09-06 01:06:22.538959 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:06:22.538970 | orchestrator | 2025-09-06 01:06:22.538987 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-06 01:06:22.538997 | orchestrator | Saturday 06 September 2025 01:01:59 +0000 (0:00:01.132) 0:04:23.970 **** 2025-09-06 01:06:22.539032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539044 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539131 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539153 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539190 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.539266 | orchestrator | 2025-09-06 01:06:22.539276 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-06 01:06:22.539286 | orchestrator | Saturday 06 September 2025 01:02:03 +0000 (0:00:04.104) 0:04:28.075 **** 2025-09-06 01:06:22.539296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.539306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.539320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539356 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.539368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.539386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.539396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539406 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.539416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.539431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.539468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539480 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.539490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-06 01:06:22.539506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539516 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.539526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-06 01:06:22.539536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539546 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.539556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-06 01:06:22.539595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539660 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.539670 | orchestrator | 2025-09-06 01:06:22.539680 | orchestrator | TASK [service-cert-copy : nova | Copying over 
backend internal TLS key] ******** 2025-09-06 01:06:22.539690 | orchestrator | Saturday 06 September 2025 01:02:06 +0000 (0:00:02.547) 0:04:30.623 **** 2025-09-06 01:06:22.539701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.539717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.539728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539738 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.539747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-06 01:06:22.539762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539772 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.539812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.539830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.539840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.539860 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.539870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.539909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539930 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.539941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-06 01:06:22.539951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539961 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.539971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-06 01:06:22.539981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.539991 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.540000 | orchestrator | 2025-09-06 01:06:22.540010 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-06 01:06:22.540020 | orchestrator | Saturday 06 September 2025 01:02:08 +0000 (0:00:02.379) 0:04:33.002 **** 2025-09-06 01:06:22.540030 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.540039 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.540048 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.540058 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-06 01:06:22.540068 | orchestrator | 2025-09-06 01:06:22.540077 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-06 01:06:22.540087 | orchestrator | Saturday 06 September 2025 01:02:09 +0000 (0:00:00.954) 0:04:33.957 **** 2025-09-06 01:06:22.540097 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-06 01:06:22.540106 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-06 01:06:22.540115 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-06 01:06:22.540125 | orchestrator | 2025-09-06 01:06:22.540134 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-06 01:06:22.540144 | orchestrator | Saturday 06 September 2025 01:02:10 +0000 (0:00:00.851) 0:04:34.808 **** 2025-09-06 01:06:22.540159 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-06 01:06:22.540169 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-06 01:06:22.540178 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-06 01:06:22.540188 | orchestrator | 2025-09-06 01:06:22.540201 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-06 01:06:22.540211 | orchestrator | Saturday 06 September 2025 01:02:11 +0000 (0:00:00.685) 0:04:35.494 **** 2025-09-06 01:06:22.540221 | orchestrator | ok: [testbed-node-3] 2025-09-06 01:06:22.540230 | orchestrator | ok: [testbed-node-4] 2025-09-06 01:06:22.540240 | orchestrator | ok: [testbed-node-5] 2025-09-06 01:06:22.540249 | orchestrator | 2025-09-06 01:06:22.540259 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-06 01:06:22.540269 | orchestrator | Saturday 06 September 2025 01:02:11 +0000 (0:00:00.588) 0:04:36.083 **** 2025-09-06 01:06:22.540278 | orchestrator | ok: [testbed-node-3] 2025-09-06 01:06:22.540288 | orchestrator | ok: [testbed-node-4] 2025-09-06 01:06:22.540297 | orchestrator | ok: [testbed-node-5] 2025-09-06 01:06:22.540306 | orchestrator | 2025-09-06 01:06:22.540341 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] 
**************************** 2025-09-06 01:06:22.540352 | orchestrator | Saturday 06 September 2025 01:02:12 +0000 (0:00:00.625) 0:04:36.709 **** 2025-09-06 01:06:22.540362 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-06 01:06:22.540371 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-06 01:06:22.540381 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-06 01:06:22.540391 | orchestrator | 2025-09-06 01:06:22.540400 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-06 01:06:22.540410 | orchestrator | Saturday 06 September 2025 01:02:13 +0000 (0:00:01.126) 0:04:37.836 **** 2025-09-06 01:06:22.540420 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-06 01:06:22.540429 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-06 01:06:22.540439 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-06 01:06:22.540448 | orchestrator | 2025-09-06 01:06:22.540458 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-06 01:06:22.540467 | orchestrator | Saturday 06 September 2025 01:02:14 +0000 (0:00:01.132) 0:04:38.968 **** 2025-09-06 01:06:22.540477 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-06 01:06:22.540487 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-06 01:06:22.540496 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-06 01:06:22.540505 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-06 01:06:22.540515 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-06 01:06:22.540525 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-06 01:06:22.540534 | orchestrator | 2025-09-06 01:06:22.540544 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-06 01:06:22.540553 | orchestrator | Saturday 06 September 2025 01:02:18 +0000 (0:00:03.806) 0:04:42.774 **** 2025-09-06 01:06:22.540563 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.540572 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.540581 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.540591 | orchestrator | 2025-09-06 01:06:22.540648 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-06 01:06:22.540660 | orchestrator | Saturday 06 September 2025 01:02:18 +0000 (0:00:00.424) 0:04:43.199 **** 2025-09-06 01:06:22.540670 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.540679 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.540689 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.540698 | orchestrator | 2025-09-06 01:06:22.540708 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-06 01:06:22.540718 | orchestrator | Saturday 06 September 2025 01:02:19 +0000 (0:00:00.315) 0:04:43.514 **** 2025-09-06 01:06:22.540734 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.540744 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.540754 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.540763 | orchestrator | 2025-09-06 01:06:22.540773 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-06 01:06:22.540782 | orchestrator | Saturday 06 
September 2025 01:02:20 +0000 (0:00:01.121) 0:04:44.636 **** 2025-09-06 01:06:22.540792 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-06 01:06:22.540803 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-06 01:06:22.540813 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-06 01:06:22.540823 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-06 01:06:22.540832 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-06 01:06:22.540842 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-06 01:06:22.540852 | orchestrator | 2025-09-06 01:06:22.540861 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-06 01:06:22.540871 | orchestrator | Saturday 06 September 2025 01:02:23 +0000 (0:00:03.297) 0:04:47.933 **** 2025-09-06 01:06:22.540881 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-06 01:06:22.540890 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-06 01:06:22.540900 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-06 01:06:22.540909 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-06 01:06:22.540919 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.540928 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-06 01:06:22.540938 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.540947 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-06 01:06:22.540962 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.540971 | orchestrator | 2025-09-06 01:06:22.540981 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-06 01:06:22.540991 | orchestrator | Saturday 06 September 2025 01:02:27 +0000 (0:00:03.670) 0:04:51.603 **** 2025-09-06 01:06:22.541000 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.541010 | orchestrator | 2025-09-06 01:06:22.541020 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-06 01:06:22.541029 | orchestrator | Saturday 06 September 2025 01:02:27 +0000 (0:00:00.150) 0:04:51.754 **** 2025-09-06 01:06:22.541039 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.541048 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.541058 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.541098 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.541109 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.541119 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.541128 | orchestrator | 2025-09-06 01:06:22.541138 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-06 01:06:22.541147 | orchestrator | Saturday 06 September 2025 01:02:28 +0000 (0:00:00.605) 0:04:52.359 **** 2025-09-06 01:06:22.541157 | orchestrator | ok: [testbed-node-3 -> 
localhost] 2025-09-06 01:06:22.541166 | orchestrator | 2025-09-06 01:06:22.541175 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-06 01:06:22.541183 | orchestrator | Saturday 06 September 2025 01:02:28 +0000 (0:00:00.672) 0:04:53.032 **** 2025-09-06 01:06:22.541191 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.541198 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.541211 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.541219 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.541227 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.541234 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.541242 | orchestrator | 2025-09-06 01:06:22.541250 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-06 01:06:22.541258 | orchestrator | Saturday 06 September 2025 01:02:29 +0000 (0:00:00.812) 0:04:53.844 **** 2025-09-06 01:06:22.541266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541275 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541332 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541421 | orchestrator | 2025-09-06 01:06:22.541429 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-06 01:06:22.541437 | orchestrator | Saturday 06 September 2025 01:02:33 +0000 (0:00:03.850) 0:04:57.694 **** 2025-09-06 01:06:22.541449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.541466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.541475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.541483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.541491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.541500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.541516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541530 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.541625 | orchestrator | 2025-09-06 01:06:22.541633 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-06 01:06:22.541641 | orchestrator | Saturday 06 September 2025 01:02:39 +0000 (0:00:06.352) 0:05:04.047 **** 2025-09-06 01:06:22.541649 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.541657 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.541664 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.541672 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.541680 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.541687 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.541695 | orchestrator | 2025-09-06 01:06:22.541703 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-06 01:06:22.541711 | orchestrator | Saturday 06 September 2025 01:02:41 +0000 (0:00:01.439) 0:05:05.487 **** 2025-09-06 01:06:22.541719 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-06 01:06:22.541726 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-06 01:06:22.541734 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-06 01:06:22.541742 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-06 01:06:22.541750 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-06 01:06:22.541758 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-06 01:06:22.541765 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-06 01:06:22.541773 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.541781 | 
orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-06 01:06:22.541789 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.541797 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-06 01:06:22.541805 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.541812 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-06 01:06:22.541821 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-06 01:06:22.541834 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-06 01:06:22.541843 | orchestrator | 2025-09-06 01:06:22.541850 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-06 01:06:22.541858 | orchestrator | Saturday 06 September 2025 01:02:44 +0000 (0:00:03.611) 0:05:09.099 **** 2025-09-06 01:06:22.541866 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.541874 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.541882 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.541889 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.541897 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.541905 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.541912 | orchestrator | 2025-09-06 01:06:22.541920 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-06 01:06:22.541928 | orchestrator | Saturday 06 September 2025 01:02:45 +0000 (0:00:00.609) 0:05:09.708 **** 2025-09-06 01:06:22.541936 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-06 01:06:22.541948 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-06 01:06:22.541955 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-06 01:06:22.541963 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-06 01:06:22.541971 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-06 01:06:22.541982 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-06 01:06:22.541991 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-06 01:06:22.541998 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-06 01:06:22.542006 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.542014 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-06 01:06:22.542044 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-06 01:06:22.542052 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-06 
01:06:22.542060 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-06 01:06:22.542068 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-06 01:06:22.542075 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-06 01:06:22.542083 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.542091 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-06 01:06:22.542099 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.542107 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-06 01:06:22.542114 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-06 01:06:22.542122 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-06 01:06:22.542130 | orchestrator | 2025-09-06 01:06:22.542138 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-06 01:06:22.542151 | orchestrator | Saturday 06 September 2025 01:02:52 +0000 (0:00:06.886) 0:05:16.594 **** 2025-09-06 01:06:22.542159 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-06 01:06:22.542167 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-06 01:06:22.542175 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-06 01:06:22.542183 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-06 01:06:22.542191 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-06 01:06:22.542198 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-06 01:06:22.542206 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-06 01:06:22.542214 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-06 01:06:22.542222 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-06 01:06:22.542230 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-06 01:06:22.542237 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-06 01:06:22.542245 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-06 01:06:22.542253 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-06 01:06:22.542261 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-06 01:06:22.542268 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-06 01:06:22.542276 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.542284 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 
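[editor's note] The "Copying files for nova-ssh" items around this point (sshd_config, id_rsa, id_rsa.pub, ssh_config) are rendered only on the compute hosts; the controllers skip them. A minimal Ansible sketch of an equivalent loop is given here purely for orientation — the destination path, file mode and playbook layout are illustrative assumptions, not the exact kolla-ansible role code:

  # Sketch only: render the nova-ssh files listed in the log items above onto
  # the compute hosts. Destination path and mode are illustrative assumptions.
  - hosts: compute
    become: true
    tasks:
      - name: Copy nova-ssh configuration files (sketch)
        ansible.builtin.template:
          src: "{{ item.src }}"
          dest: "/etc/kolla/nova-ssh/{{ item.dest }}"
          mode: "0600"
        loop:
          - { src: "sshd_config.j2", dest: "sshd_config" }
          - { src: "id_rsa", dest: "id_rsa" }
          - { src: "id_rsa.pub", dest: "id_rsa.pub" }
          - { src: "ssh_config.j2", dest: "ssh_config" }

The rendered sshd_config is what the nova_ssh container's healthcheck shown in the items above ('healthcheck_listen sshd 8022') expects to find listening on port 8022. [end editor's note]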
2025-09-06 01:06:22.542292 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-06 01:06:22.542299 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.542307 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-06 01:06:22.542315 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.542326 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-06 01:06:22.542334 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-06 01:06:22.542342 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-06 01:06:22.542350 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-06 01:06:22.542357 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-06 01:06:22.542369 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-06 01:06:22.542377 | orchestrator | 2025-09-06 01:06:22.542385 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-06 01:06:22.542393 | orchestrator | Saturday 06 September 2025 01:03:00 +0000 (0:00:08.007) 0:05:24.602 **** 2025-09-06 01:06:22.542401 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.542408 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.542416 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.542424 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.542432 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.542439 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.542447 | orchestrator | 2025-09-06 01:06:22.542455 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-06 01:06:22.542463 | orchestrator | Saturday 06 September 2025 01:03:01 +0000 (0:00:00.645) 0:05:25.247 **** 2025-09-06 01:06:22.542476 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.542483 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.542491 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.542499 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.542506 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.542514 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.542522 | orchestrator | 2025-09-06 01:06:22.542530 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-06 01:06:22.542538 | orchestrator | Saturday 06 September 2025 01:03:01 +0000 (0:00:00.558) 0:05:25.806 **** 2025-09-06 01:06:22.542546 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.542553 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.542561 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.542569 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.542577 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.542584 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.542592 | orchestrator | 2025-09-06 01:06:22.542614 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-06 01:06:22.542623 | orchestrator | Saturday 06 September 2025 01:03:03 +0000 (0:00:01.886) 
0:05:27.693 **** 2025-09-06 01:06:22.542631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.542640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.542652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.542661 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.542674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.542688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.542696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-06 01:06:22.542705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.542713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-06 01:06:22.542721 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.542733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.542750 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.542759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-06 01:06:22.542767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.542775 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.542783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-06 01:06:22.542792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.542800 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.542808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-06 01:06:22.542820 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-06 01:06:22.542842 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.542856 | orchestrator | 2025-09-06 01:06:22.542870 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-06 01:06:22.542883 | orchestrator | Saturday 06 September 2025 01:03:04 +0000 (0:00:01.204) 0:05:28.897 **** 2025-09-06 01:06:22.542896 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-06 01:06:22.542904 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-06 01:06:22.542912 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.542920 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-06 01:06:22.542932 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-06 01:06:22.542941 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.542948 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-06 01:06:22.542956 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-06 01:06:22.542964 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.542972 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-06 01:06:22.542980 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-06 01:06:22.542987 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.542995 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-06 01:06:22.543003 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-06 01:06:22.543011 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.543018 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-06 01:06:22.543026 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-06 01:06:22.543034 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.543042 | orchestrator | 2025-09-06 01:06:22.543049 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-06 01:06:22.543057 | orchestrator | Saturday 06 September 2025 01:03:05 +0000 (0:00:00.862) 0:05:29.760 **** 2025-09-06 01:06:22.543065 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 
'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543186 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543194 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-06 01:06:22.543223 | orchestrator | 2025-09-06 01:06:22.543231 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-06 01:06:22.543239 | orchestrator | Saturday 06 September 2025 01:03:08 +0000 (0:00:02.790) 0:05:32.550 **** 2025-09-06 01:06:22.543247 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.543255 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.543263 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.543271 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.543278 | orchestrator | 
skipping: [testbed-node-1] 2025-09-06 01:06:22.543286 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.543294 | orchestrator | 2025-09-06 01:06:22.543302 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-06 01:06:22.543310 | orchestrator | Saturday 06 September 2025 01:03:09 +0000 (0:00:01.060) 0:05:33.611 **** 2025-09-06 01:06:22.543317 | orchestrator | 2025-09-06 01:06:22.543328 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-06 01:06:22.543336 | orchestrator | Saturday 06 September 2025 01:03:09 +0000 (0:00:00.156) 0:05:33.768 **** 2025-09-06 01:06:22.543344 | orchestrator | 2025-09-06 01:06:22.543352 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-06 01:06:22.543360 | orchestrator | Saturday 06 September 2025 01:03:09 +0000 (0:00:00.130) 0:05:33.899 **** 2025-09-06 01:06:22.543368 | orchestrator | 2025-09-06 01:06:22.543376 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-06 01:06:22.543384 | orchestrator | Saturday 06 September 2025 01:03:09 +0000 (0:00:00.134) 0:05:34.033 **** 2025-09-06 01:06:22.543391 | orchestrator | 2025-09-06 01:06:22.543403 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-06 01:06:22.543411 | orchestrator | Saturday 06 September 2025 01:03:09 +0000 (0:00:00.128) 0:05:34.162 **** 2025-09-06 01:06:22.543419 | orchestrator | 2025-09-06 01:06:22.543427 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-06 01:06:22.543434 | orchestrator | Saturday 06 September 2025 01:03:10 +0000 (0:00:00.128) 0:05:34.290 **** 2025-09-06 01:06:22.543442 | orchestrator | 2025-09-06 01:06:22.543450 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-06 01:06:22.543458 | orchestrator | Saturday 06 September 2025 01:03:10 +0000 (0:00:00.217) 0:05:34.508 **** 2025-09-06 01:06:22.543465 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:06:22.543473 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.543481 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:06:22.543489 | orchestrator | 2025-09-06 01:06:22.543497 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-06 01:06:22.543504 | orchestrator | Saturday 06 September 2025 01:03:22 +0000 (0:00:12.476) 0:05:46.984 **** 2025-09-06 01:06:22.543512 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.543520 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:06:22.543528 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:06:22.543535 | orchestrator | 2025-09-06 01:06:22.543543 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-06 01:06:22.543551 | orchestrator | Saturday 06 September 2025 01:03:34 +0000 (0:00:11.733) 0:05:58.718 **** 2025-09-06 01:06:22.543559 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.543567 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.543580 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.543587 | orchestrator | 2025-09-06 01:06:22.543595 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-06 01:06:22.543642 | orchestrator | Saturday 06 September 2025 01:03:59 
+0000 (0:00:25.486) 0:06:24.204 **** 2025-09-06 01:06:22.543651 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.543659 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.543666 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.543674 | orchestrator | 2025-09-06 01:06:22.543682 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-06 01:06:22.543690 | orchestrator | Saturday 06 September 2025 01:04:36 +0000 (0:00:36.760) 0:07:00.964 **** 2025-09-06 01:06:22.543697 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-09-06 01:06:22.543706 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-09-06 01:06:22.543714 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2025-09-06 01:06:22.543722 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.543729 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.543737 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.543745 | orchestrator | 2025-09-06 01:06:22.543753 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-06 01:06:22.543761 | orchestrator | Saturday 06 September 2025 01:04:43 +0000 (0:00:06.298) 0:07:07.263 **** 2025-09-06 01:06:22.543769 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.543776 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.543784 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.543792 | orchestrator | 2025-09-06 01:06:22.543800 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-06 01:06:22.543807 | orchestrator | Saturday 06 September 2025 01:04:43 +0000 (0:00:00.855) 0:07:08.118 **** 2025-09-06 01:06:22.543815 | orchestrator | changed: [testbed-node-3] 2025-09-06 01:06:22.543823 | orchestrator | changed: [testbed-node-5] 2025-09-06 01:06:22.543831 | orchestrator | changed: [testbed-node-4] 2025-09-06 01:06:22.543839 | orchestrator | 2025-09-06 01:06:22.543846 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-06 01:06:22.543854 | orchestrator | Saturday 06 September 2025 01:05:10 +0000 (0:00:26.373) 0:07:34.492 **** 2025-09-06 01:06:22.543862 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.543870 | orchestrator | 2025-09-06 01:06:22.543878 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-06 01:06:22.543886 | orchestrator | Saturday 06 September 2025 01:05:10 +0000 (0:00:00.121) 0:07:34.613 **** 2025-09-06 01:06:22.543893 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.543901 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.543909 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.543917 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.543924 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.543932 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
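For orientation while reading the retries above: each container definition earlier in this play declares a 'healthcheck' whose 'test' entry is an ordinary shell command inside the kolla image, so comparable checks can be run by hand on a compute node while a handler such as "Checking libvirt container is ready" is still retrying. A minimal sketch, assuming the Docker CLI is available on the testbed nodes and reusing the container names and test commands shown in this log (the Ansible handler itself may use a different mechanism internally):

  docker exec nova_libvirt virsh version --daemon               # test command from the nova-libvirt healthcheck above
  docker exec nova_ssh healthcheck_listen sshd 8022             # nova-ssh: is sshd listening on port 8022?
  docker exec nova_compute healthcheck_port nova-compute 5672   # nova-compute: does the process hold a connection to port 5672 (RabbitMQ)?

While nova_libvirt is still starting up, the first command fails, which matches the FAILED - RETRYING messages above before the check finally reports changed.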
2025-09-06 01:06:22.543940 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-06 01:06:22.543948 | orchestrator | 2025-09-06 01:06:22.543956 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-06 01:06:22.543963 | orchestrator | Saturday 06 September 2025 01:05:32 +0000 (0:00:21.995) 0:07:56.609 **** 2025-09-06 01:06:22.543971 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.543979 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.543987 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.543995 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.544007 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.544015 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.544023 | orchestrator | 2025-09-06 01:06:22.544036 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-06 01:06:22.544044 | orchestrator | Saturday 06 September 2025 01:05:41 +0000 (0:00:09.398) 0:08:06.007 **** 2025-09-06 01:06:22.544052 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.544059 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.544067 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.544075 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.544083 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.544095 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-09-06 01:06:22.544103 | orchestrator | 2025-09-06 01:06:22.544111 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-06 01:06:22.544118 | orchestrator | Saturday 06 September 2025 01:05:45 +0000 (0:00:04.084) 0:08:10.092 **** 2025-09-06 01:06:22.544126 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-06 01:06:22.544134 | orchestrator | 2025-09-06 01:06:22.544140 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-06 01:06:22.544147 | orchestrator | Saturday 06 September 2025 01:05:58 +0000 (0:00:12.162) 0:08:22.255 **** 2025-09-06 01:06:22.544154 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-06 01:06:22.544160 | orchestrator | 2025-09-06 01:06:22.544167 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-06 01:06:22.544173 | orchestrator | Saturday 06 September 2025 01:05:59 +0000 (0:00:01.300) 0:08:23.555 **** 2025-09-06 01:06:22.544180 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.544187 | orchestrator | 2025-09-06 01:06:22.544193 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-06 01:06:22.544200 | orchestrator | Saturday 06 September 2025 01:06:00 +0000 (0:00:01.343) 0:08:24.899 **** 2025-09-06 01:06:22.544206 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-06 01:06:22.544213 | orchestrator | 2025-09-06 01:06:22.544219 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-06 01:06:22.544226 | orchestrator | Saturday 06 September 2025 01:06:12 +0000 (0:00:11.932) 0:08:36.831 **** 2025-09-06 01:06:22.544233 | orchestrator | ok: [testbed-node-3] 2025-09-06 01:06:22.544239 | orchestrator | ok: [testbed-node-4] 2025-09-06 01:06:22.544246 | orchestrator | ok: 
[testbed-node-5] 2025-09-06 01:06:22.544253 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:06:22.544259 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:06:22.544266 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:06:22.544272 | orchestrator | 2025-09-06 01:06:22.544279 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-06 01:06:22.544286 | orchestrator | 2025-09-06 01:06:22.544292 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-06 01:06:22.544299 | orchestrator | Saturday 06 September 2025 01:06:14 +0000 (0:00:01.869) 0:08:38.701 **** 2025-09-06 01:06:22.544306 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:06:22.544312 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:06:22.544319 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:06:22.544326 | orchestrator | 2025-09-06 01:06:22.544332 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-06 01:06:22.544339 | orchestrator | 2025-09-06 01:06:22.544345 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-06 01:06:22.544352 | orchestrator | Saturday 06 September 2025 01:06:15 +0000 (0:00:01.086) 0:08:39.787 **** 2025-09-06 01:06:22.544359 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.544365 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.544372 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.544379 | orchestrator | 2025-09-06 01:06:22.544385 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-06 01:06:22.544392 | orchestrator | 2025-09-06 01:06:22.544398 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-06 01:06:22.544409 | orchestrator | Saturday 06 September 2025 01:06:16 +0000 (0:00:00.501) 0:08:40.289 **** 2025-09-06 01:06:22.544416 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-06 01:06:22.544422 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-06 01:06:22.544429 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-06 01:06:22.544435 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-06 01:06:22.544442 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-06 01:06:22.544449 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-06 01:06:22.544455 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-06 01:06:22.544462 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-06 01:06:22.544468 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-06 01:06:22.544475 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-06 01:06:22.544481 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-06 01:06:22.544488 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-06 01:06:22.544495 | orchestrator | skipping: [testbed-node-3] 2025-09-06 01:06:22.544501 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-06 01:06:22.544508 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-06 01:06:22.544514 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  
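The cell discovery steps recorded just above ("Get a list of existing cells", "Extract current cell settings from list", "Discover nova hosts") correspond to nova's cell_v2 management commands, delegated to testbed-node-0 in this run. A rough manual equivalent, assuming the nova_conductor container name shown in this log and the upstream nova-manage CLI (the exact call made by the discover_computes.yml tasks is not visible here):

  docker exec nova_conductor nova-manage cell_v2 list_cells                 # cells with their database/transport URLs ("Get a list of existing cells")
  docker exec nova_conductor nova-manage cell_v2 discover_hosts --verbose   # map newly registered nova-compute hosts into the cell ("Discover nova hosts")

The "Reload nova cell services to remove RPC version cap" items below are skipped on every host; that reload is only relevant once an upgrade has pinned RPC versions, which does not apply to a fresh deployment like this one.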
2025-09-06 01:06:22.544521 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-06 01:06:22.544527 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-06 01:06:22.544534 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-06 01:06:22.544541 | orchestrator | skipping: [testbed-node-4] 2025-09-06 01:06:22.544547 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-06 01:06:22.544557 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-06 01:06:22.544564 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-06 01:06:22.544570 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-06 01:06:22.544577 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-06 01:06:22.544583 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-06 01:06:22.544590 | orchestrator | skipping: [testbed-node-5] 2025-09-06 01:06:22.544597 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-06 01:06:22.544616 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-06 01:06:22.544623 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-06 01:06:22.544633 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-06 01:06:22.544640 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-06 01:06:22.544647 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-06 01:06:22.544654 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.544660 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.544667 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-06 01:06:22.544673 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-06 01:06:22.544680 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-06 01:06:22.544687 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-06 01:06:22.544693 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-06 01:06:22.544699 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-06 01:06:22.544706 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:06:22.544713 | orchestrator | 2025-09-06 01:06:22.544719 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-06 01:06:22.544730 | orchestrator | 2025-09-06 01:06:22.544737 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-06 01:06:22.544743 | orchestrator | Saturday 06 September 2025 01:06:17 +0000 (0:00:01.299) 0:08:41.589 **** 2025-09-06 01:06:22.544750 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-06 01:06:22.544756 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-06 01:06:22.544763 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:06:22.544770 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-06 01:06:22.544776 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-06 01:06:22.544783 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:06:22.544790 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-06 01:06:22.544796 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)
2025-09-06 01:06:22.544803 | orchestrator | skipping: [testbed-node-2]
2025-09-06 01:06:22.544809 | orchestrator |
2025-09-06 01:06:22.544816 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-06 01:06:22.544822 | orchestrator |
2025-09-06 01:06:22.544829 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-06 01:06:22.544836 | orchestrator | Saturday 06 September 2025 01:06:18 +0000 (0:00:00.739) 0:08:42.329 ****
2025-09-06 01:06:22.544842 | orchestrator | skipping: [testbed-node-0]
2025-09-06 01:06:22.544849 | orchestrator |
2025-09-06 01:06:22.544855 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-06 01:06:22.544862 | orchestrator |
2025-09-06 01:06:22.544869 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-06 01:06:22.544875 | orchestrator | Saturday 06 September 2025 01:06:18 +0000 (0:00:00.651) 0:08:42.980 ****
2025-09-06 01:06:22.544882 | orchestrator | skipping: [testbed-node-0]
2025-09-06 01:06:22.544888 | orchestrator | skipping: [testbed-node-1]
2025-09-06 01:06:22.544895 | orchestrator | skipping: [testbed-node-2]
2025-09-06 01:06:22.544902 | orchestrator |
2025-09-06 01:06:22.544908 | orchestrator | PLAY RECAP *********************************************************************
2025-09-06 01:06:22.544915 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-06 01:06:22.544922 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-09-06 01:06:22.544929 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-06 01:06:22.544936 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-06 01:06:22.544942 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-06 01:06:22.544949 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-06 01:06:22.544955 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-09-06 01:06:22.544962 | orchestrator |
2025-09-06 01:06:22.544969 | orchestrator |
2025-09-06 01:06:22.544975 | orchestrator | TASKS RECAP ********************************************************************
2025-09-06 01:06:22.544982 | orchestrator | Saturday 06 September 2025 01:06:19 +0000 (0:00:00.436) 0:08:43.417 ****
2025-09-06 01:06:22.544989 | orchestrator | ===============================================================================
2025-09-06 01:06:22.544999 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.76s
2025-09-06 01:06:22.545005 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 26.75s
2025-09-06 01:06:22.545016 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.37s
2025-09-06 01:06:22.545023 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.49s
2025-09-06 01:06:22.545029 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.94s
2025-09-06 01:06:22.545036 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.00s
2025-09-06 01:06:22.545042 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.62s
2025-09-06 01:06:22.545052 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.35s
2025-09-06 01:06:22.545059 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.36s
2025-09-06 01:06:22.545065 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.48s
2025-09-06 01:06:22.545072 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.16s
2025-09-06 01:06:22.545078 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.16s
2025-09-06 01:06:22.545085 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.93s
2025-09-06 01:06:22.545092 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.87s
2025-09-06 01:06:22.545098 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.73s
2025-09-06 01:06:22.545105 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.64s
2025-09-06 01:06:22.545111 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.88s
2025-09-06 01:06:22.545118 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.11s
2025-09-06 01:06:22.545125 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.40s
2025-09-06 01:06:22.545131 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.58s
2025-09-06 01:06:22.545138 | orchestrator | 2025-09-06 01:06:22 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED
2025-09-06 01:06:22.545145 | orchestrator | 2025-09-06 01:06:22 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED
2025-09-06 01:06:22.545151 | orchestrator | 2025-09-06 01:06:22 | INFO  | Wait 1 second(s) until the next check
2025-09-06 01:06:25.577305 | orchestrator | 2025-09-06 01:06:25 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED
2025-09-06 01:06:25.579634 | orchestrator | 2025-09-06 01:06:25 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED
2025-09-06 01:06:25.581763 | orchestrator | 2025-09-06 01:06:25 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED
2025-09-06 01:06:25.581787 | orchestrator | 2025-09-06 01:06:25 | INFO  | Wait 1 second(s) until the next check
2025-09-06 01:06:28.632050 | orchestrator | 2025-09-06 01:06:28 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED
2025-09-06 01:06:28.633653 | orchestrator | 2025-09-06 01:06:28 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED
2025-09-06 01:06:28.637307 | orchestrator | 2025-09-06 01:06:28 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED
2025-09-06 01:06:28.637339 | orchestrator | 2025-09-06 01:06:28 | INFO  | Wait 1 second(s) until the next check
2025-09-06 01:06:31.685244 | orchestrator | 2025-09-06 01:06:31 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED
2025-09-06 01:06:31.686626 | orchestrator | 2025-09-06 01:06:31 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED
2025-09-06 01:06:31.688296 | orchestrator | 2025-09-06
01:06:31 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:31.688320 | orchestrator | 2025-09-06 01:06:31 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:34.733828 | orchestrator | 2025-09-06 01:06:34 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:34.735191 | orchestrator | 2025-09-06 01:06:34 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:34.736681 | orchestrator | 2025-09-06 01:06:34 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:34.736710 | orchestrator | 2025-09-06 01:06:34 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:37.780700 | orchestrator | 2025-09-06 01:06:37 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:37.782272 | orchestrator | 2025-09-06 01:06:37 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:37.783750 | orchestrator | 2025-09-06 01:06:37 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:37.783832 | orchestrator | 2025-09-06 01:06:37 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:40.821223 | orchestrator | 2025-09-06 01:06:40 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:40.824472 | orchestrator | 2025-09-06 01:06:40 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:40.824508 | orchestrator | 2025-09-06 01:06:40 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:40.824522 | orchestrator | 2025-09-06 01:06:40 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:43.871404 | orchestrator | 2025-09-06 01:06:43 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:43.872041 | orchestrator | 2025-09-06 01:06:43 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:43.874495 | orchestrator | 2025-09-06 01:06:43 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:43.874716 | orchestrator | 2025-09-06 01:06:43 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:46.918358 | orchestrator | 2025-09-06 01:06:46 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:46.920151 | orchestrator | 2025-09-06 01:06:46 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:46.921784 | orchestrator | 2025-09-06 01:06:46 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:46.921806 | orchestrator | 2025-09-06 01:06:46 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:49.970966 | orchestrator | 2025-09-06 01:06:49 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:49.971752 | orchestrator | 2025-09-06 01:06:49 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:49.973109 | orchestrator | 2025-09-06 01:06:49 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:49.973131 | orchestrator | 2025-09-06 01:06:49 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:53.016611 | orchestrator | 2025-09-06 01:06:53 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:53.017393 | orchestrator | 2025-09-06 01:06:53 | INFO  | Task 
b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:53.018674 | orchestrator | 2025-09-06 01:06:53 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:53.018699 | orchestrator | 2025-09-06 01:06:53 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:56.064044 | orchestrator | 2025-09-06 01:06:56 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:56.065229 | orchestrator | 2025-09-06 01:06:56 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:56.067080 | orchestrator | 2025-09-06 01:06:56 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:56.067109 | orchestrator | 2025-09-06 01:06:56 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:06:59.112259 | orchestrator | 2025-09-06 01:06:59 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:06:59.113776 | orchestrator | 2025-09-06 01:06:59 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:06:59.115088 | orchestrator | 2025-09-06 01:06:59 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:06:59.115108 | orchestrator | 2025-09-06 01:06:59 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:02.160841 | orchestrator | 2025-09-06 01:07:02 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:07:02.162387 | orchestrator | 2025-09-06 01:07:02 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:02.163372 | orchestrator | 2025-09-06 01:07:02 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:02.163517 | orchestrator | 2025-09-06 01:07:02 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:05.211975 | orchestrator | 2025-09-06 01:07:05 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:07:05.213591 | orchestrator | 2025-09-06 01:07:05 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:05.216034 | orchestrator | 2025-09-06 01:07:05 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:05.216082 | orchestrator | 2025-09-06 01:07:05 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:08.261301 | orchestrator | 2025-09-06 01:07:08 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:07:08.261404 | orchestrator | 2025-09-06 01:07:08 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:08.263208 | orchestrator | 2025-09-06 01:07:08 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:08.263235 | orchestrator | 2025-09-06 01:07:08 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:11.304024 | orchestrator | 2025-09-06 01:07:11 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:07:11.305714 | orchestrator | 2025-09-06 01:07:11 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:11.307808 | orchestrator | 2025-09-06 01:07:11 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:11.308066 | orchestrator | 2025-09-06 01:07:11 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:14.347837 | orchestrator | 2025-09-06 01:07:14 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state 
STARTED 2025-09-06 01:07:14.349665 | orchestrator | 2025-09-06 01:07:14 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:14.352856 | orchestrator | 2025-09-06 01:07:14 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:14.352881 | orchestrator | 2025-09-06 01:07:14 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:17.396661 | orchestrator | 2025-09-06 01:07:17 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state STARTED 2025-09-06 01:07:17.397735 | orchestrator | 2025-09-06 01:07:17 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:17.399159 | orchestrator | 2025-09-06 01:07:17 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:17.399200 | orchestrator | 2025-09-06 01:07:17 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:20.445393 | orchestrator | 2025-09-06 01:07:20 | INFO  | Task e2e49a60-b99f-4235-9f88-e3a9f4423f05 is in state SUCCESS 2025-09-06 01:07:20.446325 | orchestrator | 2025-09-06 01:07:20.446363 | orchestrator | 2025-09-06 01:07:20.446377 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-06 01:07:20.446389 | orchestrator | 2025-09-06 01:07:20.446401 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-06 01:07:20.446412 | orchestrator | Saturday 06 September 2025 01:05:00 +0000 (0:00:00.260) 0:00:00.260 **** 2025-09-06 01:07:20.446423 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:07:20.446436 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:07:20.446447 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:07:20.446457 | orchestrator | 2025-09-06 01:07:20.446469 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-06 01:07:20.446480 | orchestrator | Saturday 06 September 2025 01:05:00 +0000 (0:00:00.288) 0:00:00.549 **** 2025-09-06 01:07:20.446522 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-06 01:07:20.446536 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-06 01:07:20.446679 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-06 01:07:20.447118 | orchestrator | 2025-09-06 01:07:20.447134 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-06 01:07:20.447145 | orchestrator | 2025-09-06 01:07:20.447156 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-06 01:07:20.447167 | orchestrator | Saturday 06 September 2025 01:05:00 +0000 (0:00:00.415) 0:00:00.964 **** 2025-09-06 01:07:20.447178 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:07:20.447190 | orchestrator | 2025-09-06 01:07:20.447201 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-06 01:07:20.447257 | orchestrator | Saturday 06 September 2025 01:05:01 +0000 (0:00:00.523) 0:00:01.488 **** 2025-09-06 01:07:20.447273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.447306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.447319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.447789 | orchestrator | 2025-09-06 01:07:20.447809 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-06 01:07:20.447820 | orchestrator | Saturday 06 September 2025 01:05:02 +0000 (0:00:00.827) 0:00:02.315 **** 2025-09-06 01:07:20.447831 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-06 01:07:20.447843 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-06 01:07:20.447854 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 01:07:20.447865 | orchestrator | 2025-09-06 01:07:20.447875 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-06 01:07:20.447886 | orchestrator | Saturday 06 September 2025 01:05:02 +0000 (0:00:00.819) 0:00:03.135 **** 2025-09-06 01:07:20.447897 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:07:20.447908 | orchestrator | 2025-09-06 01:07:20.447919 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-06 01:07:20.447930 | orchestrator | Saturday 06 September 2025 01:05:03 +0000 (0:00:00.689) 0:00:03.825 **** 2025-09-06 01:07:20.447985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.448000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.448012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.448023 | orchestrator | 2025-09-06 01:07:20.448034 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-06 01:07:20.448053 | orchestrator | Saturday 06 September 2025 01:05:04 +0000 (0:00:01.408) 0:00:05.233 **** 2025-09-06 01:07:20.448086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-06 01:07:20.448110 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:07:20.448122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-06 01:07:20.448134 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:07:20.448178 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-06 01:07:20.448192 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:07:20.448203 | orchestrator | 2025-09-06 01:07:20.448214 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-06 01:07:20.448225 | orchestrator | Saturday 06 September 2025 01:05:05 +0000 (0:00:00.402) 0:00:05.636 **** 2025-09-06 01:07:20.448236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-06 01:07:20.448248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-06 01:07:20.448259 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:07:20.448270 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:07:20.448287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-06 01:07:20.448308 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:07:20.448319 | orchestrator | 2025-09-06 01:07:20.448330 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-06 01:07:20.448341 | orchestrator | Saturday 06 September 2025 
01:05:06 +0000 (0:00:01.013) 0:00:06.649 **** 2025-09-06 01:07:20.448352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.448364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.448407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.448422 | orchestrator | 2025-09-06 01:07:20.448437 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-06 01:07:20.448450 | orchestrator | Saturday 06 September 2025 01:05:07 +0000 (0:00:01.303) 0:00:07.952 **** 2025-09-06 01:07:20.448464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.448477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.448523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.448539 | orchestrator | 2025-09-06 01:07:20.448552 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-06 01:07:20.448565 | orchestrator | Saturday 06 September 2025 01:05:09 +0000 (0:00:01.466) 0:00:09.419 **** 2025-09-06 01:07:20.448578 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:07:20.448591 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:07:20.448603 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:07:20.448616 | orchestrator | 2025-09-06 01:07:20.448629 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-06 01:07:20.448643 | orchestrator | Saturday 06 September 2025 01:05:09 +0000 (0:00:00.519) 0:00:09.938 **** 2025-09-06 01:07:20.448655 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-06 01:07:20.448669 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-06 01:07:20.448682 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-06 01:07:20.448695 | orchestrator | 2025-09-06 01:07:20.448708 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-06 01:07:20.448722 | orchestrator | Saturday 06 September 2025 01:05:11 +0000 (0:00:01.392) 0:00:11.331 **** 2025-09-06 01:07:20.448735 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-06 01:07:20.448748 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-06 01:07:20.448759 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-06 01:07:20.448770 | orchestrator | 2025-09-06 01:07:20.448780 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-06 01:07:20.448791 | orchestrator | Saturday 06 September 2025 01:05:12 +0000 (0:00:01.640) 0:00:12.972 **** 2025-09-06 01:07:20.448832 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-06 01:07:20.448845 | orchestrator | 2025-09-06 01:07:20.448856 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-06 01:07:20.448866 | orchestrator | Saturday 06 September 2025 01:05:14 +0000 
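The prometheus.yaml.j2 template rendered by the data-source task above produces a standard Grafana datasource provisioning file. A minimal sketch of what such a file typically contains is shown below, assuming a generic internal Prometheus endpoint; the real URL, credentials and any extra fields are filled in from kolla-ansible variables and are not visible in this log.

    # Illustrative Grafana datasource provisioning file of the kind rendered
    # from prometheus.yaml.j2; the URL below is a placeholder, the real
    # endpoint is rendered from kolla-ansible variables.
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: "http://prometheus.internal:9091"  # placeholder internal endpoint
        isDefault: true
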
(0:00:01.326) 0:00:14.299 **** 2025-09-06 01:07:20.448877 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-06 01:07:20.448888 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-06 01:07:20.448899 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:07:20.448910 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:07:20.448921 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:07:20.448932 | orchestrator | 2025-09-06 01:07:20.448943 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-06 01:07:20.448961 | orchestrator | Saturday 06 September 2025 01:05:15 +0000 (0:00:01.067) 0:00:15.366 **** 2025-09-06 01:07:20.448972 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:07:20.448983 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:07:20.448994 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:07:20.449004 | orchestrator | 2025-09-06 01:07:20.449015 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-06 01:07:20.449026 | orchestrator | Saturday 06 September 2025 01:05:15 +0000 (0:00:00.485) 0:00:15.852 **** 2025-09-06 01:07:20.449038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090069, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8106437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090069, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8106437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090069, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8106437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090136, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.82269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090136, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.82269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090136, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.82269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090093, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8129852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090093, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8129852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090093, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8129852, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090143, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8253188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090143, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8253188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090143, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8253188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090107, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8173187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090107, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8173187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090107, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8173187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090128, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.820786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090128, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.820786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090128, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.820786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090067, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8077378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': 
'/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090067, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8077378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090067, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8077378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090082, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8113632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090082, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8113632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090082, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8113632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 
'inode': 1090095, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8133185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090095, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8133185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090095, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8133185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090114, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8193188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090114, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8193188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090114, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8193188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090135, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8217745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090135, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8217745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090135, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8217745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090088, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8123186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090088, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8123186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449835 | orchestrator | changed: [testbed-node-2] => 
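The items in this loop (a relative dashboard path as the key and a stat-style dict as the value) come from scanning /operations/grafana/dashboards on the deployment host and copying each JSON file onto the controllers. A hedged sketch of that pattern, with variable names, file modes and destination paths chosen for illustration rather than taken from the kolla-ansible role, could look like this:

    # Role-style tasks file sketching the "find dashboards on the deployment
    # host, copy them to every controller" pattern visible in this loop.
    # Names, modes and destination paths are illustrative assumptions.
    - name: Find custom grafana dashboards
      ansible.builtin.find:
        paths: /operations/grafana/dashboards
        patterns: "*.json"
        recurse: true
      delegate_to: localhost
      run_once: true
      register: dashboard_files

    - name: Copying over custom dashboards
      ansible.builtin.copy:
        src: "{{ item.path }}"
        dest: "/etc/kolla/grafana/dashboards/{{ item.path | relpath('/operations/grafana/dashboards') }}"
        mode: "0660"
      loop: "{{ dashboard_files.files }}"
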
(item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090088, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8123186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090124, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.820786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090124, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.820786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090124, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.820786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090111, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8183186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090111, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8183186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090111, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8183186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090101, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8153186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090101, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8153186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090101, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8153186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.449990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090100, 'dev': 90, 
'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8143663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090100, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8143663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090100, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8143663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090119, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8203187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090119, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8203187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090119, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8203187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090098, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8136451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090098, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8136451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090098, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8136451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090130, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8217745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090130, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8217745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450190 | orchestrator | changed: 
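For the copied dashboards to appear in Grafana, the provisioning.yaml delivered by the "Configuring dashboards provisioning" task above has to declare a file-based dashboard provider. An illustrative example of such a provider definition, with paths and names assumed rather than read from the testbed overlay file, is:

    # Illustrative Grafana dashboard provider, the kind of content carried by
    # the provisioning.yaml copied earlier; paths and names are assumptions.
    apiVersion: 1
    providers:
      - name: default
        orgId: 1
        folder: ""
        type: file
        disableDeletion: false
        updateIntervalSeconds: 30
        options:
          path: /var/lib/grafana/dashboards
          foldersFromFilesStructure: true
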
[testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090130, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8217745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090316, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8603194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090316, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8603194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090316, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8603194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090177, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8377297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090177, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8377297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090177, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8377297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090157, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8291037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090157, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8291037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090157, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8291037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 15725, 'inode': 1090217, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8397365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090217, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8397365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090217, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8397365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090151, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8261485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090151, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8261485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 
1090151, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8261485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090265, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8528068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090265, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8528068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090265, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8528068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090219, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8492186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090219, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 
'ctime': 1757117710.8492186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090272, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8533192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090219, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8492186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090308, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8591654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090272, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8533192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090272, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8533192, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090308, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8591654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090257, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.851319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090308, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8591654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090257, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.851319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090210, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.839319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090257, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.851319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090210, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.839319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090169, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8313189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090210, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.839319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090169, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8313189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450784 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090205, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.838319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090169, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8313189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090205, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.838319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090162, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8305252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090205, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.838319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090162, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8305252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090215, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8397365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090162, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8305252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090215, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8397365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090295, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8573194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090215, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8397365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090295, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8573194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090281, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8558624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090295, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8573194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.450990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090281, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8558624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 31128, 'inode': 1090152, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8263187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090281, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8558624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090153, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.827319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090152, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8263187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090152, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8263187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090153, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 
1757116932.0, 'ctime': 1757117710.827319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090247, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8510404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090153, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.827319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090247, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.8510404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090278, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.854319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090247, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 
1757117710.8510404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090278, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.854319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090278, 'dev': 90, 'nlink': 1, 'atime': 1757116932.0, 'mtime': 1757116932.0, 'ctime': 1757117710.854319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-06 01:07:20.451193 | orchestrator | 2025-09-06 01:07:20.451204 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-06 01:07:20.451215 | orchestrator | Saturday 06 September 2025 01:05:54 +0000 (0:00:38.505) 0:00:54.357 **** 2025-09-06 01:07:20.451226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.451244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.451256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-06 01:07:20.451267 | orchestrator | 2025-09-06 01:07:20.451278 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-06 01:07:20.451289 | orchestrator | Saturday 06 September 2025 01:05:55 +0000 (0:00:01.030) 0:00:55.388 **** 2025-09-06 01:07:20.451300 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:07:20.451311 | orchestrator | 2025-09-06 01:07:20.451322 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-06 01:07:20.451333 | orchestrator | Saturday 06 September 2025 01:05:57 +0000 (0:00:02.374) 0:00:57.762 **** 2025-09-06 01:07:20.451343 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:07:20.451354 | orchestrator | 2025-09-06 01:07:20.451365 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-06 01:07:20.451375 | orchestrator | Saturday 06 September 2025 01:05:59 +0000 (0:00:02.338) 0:01:00.100 **** 2025-09-06 01:07:20.451386 | orchestrator | 2025-09-06 01:07:20.451397 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-06 01:07:20.451408 | orchestrator | Saturday 06 September 2025 01:05:59 +0000 (0:00:00.070) 0:01:00.171 **** 2025-09-06 01:07:20.451418 | orchestrator | 2025-09-06 01:07:20.451434 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-06 01:07:20.451446 | orchestrator | Saturday 06 September 2025 01:05:59 +0000 (0:00:00.066) 0:01:00.237 **** 2025-09-06 01:07:20.451456 | orchestrator | 2025-09-06 01:07:20.451467 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-06 01:07:20.451484 | orchestrator | Saturday 06 September 2025 01:06:00 +0000 (0:00:00.262) 0:01:00.499 **** 2025-09-06 01:07:20.451516 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:07:20.451527 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:07:20.451538 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:07:20.451549 | orchestrator | 2025-09-06 01:07:20.451559 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-06 01:07:20.451570 | orchestrator | Saturday 06 September 2025 01:06:02 +0000 (0:00:01.740) 0:01:02.240 **** 2025-09-06 01:07:20.451581 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:07:20.451592 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:07:20.451603 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-06 01:07:20.451614 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-06 01:07:20.451625 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
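Annotation: the "Waiting for grafana to start on first node" handler above simply polls the Grafana HTTP endpoint and retries until it answers, so the three FAILED - RETRYING records only mean the freshly restarted container needed a short while before the check returned ok. A minimal sketch of that wait-with-retries pattern in Python follows; the URL, retry count, and delay are illustrative assumptions, not the values used by the actual kolla-ansible role.

    import time
    import urllib.error
    import urllib.request

    def wait_for_http(url: str, retries: int = 12, delay: float = 10.0) -> bool:
        """Poll `url` until it answers with HTTP 200 or the retries run out."""
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    if resp.status == 200:
                        return True
            except (urllib.error.URLError, OSError):
                pass  # service not reachable yet, keep waiting
            print(f"FAILED - RETRYING ({retries - attempt} retries left)")
            time.sleep(delay)
        return False

    # Illustrative target only: the real handler checks the Grafana port on the first node.
    wait_for_http("http://192.0.2.10:3000/login")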
2025-09-06 01:07:20.451636 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:07:20.451647 | orchestrator | 2025-09-06 01:07:20.451658 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-06 01:07:20.451669 | orchestrator | Saturday 06 September 2025 01:06:40 +0000 (0:00:38.646) 0:01:40.886 **** 2025-09-06 01:07:20.451679 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:07:20.451690 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:07:20.451701 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:07:20.451711 | orchestrator | 2025-09-06 01:07:20.451722 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-06 01:07:20.451733 | orchestrator | Saturday 06 September 2025 01:07:13 +0000 (0:00:32.657) 0:02:13.543 **** 2025-09-06 01:07:20.451744 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:07:20.451755 | orchestrator | 2025-09-06 01:07:20.451765 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-06 01:07:20.451776 | orchestrator | Saturday 06 September 2025 01:07:15 +0000 (0:00:02.190) 0:02:15.734 **** 2025-09-06 01:07:20.451787 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:07:20.451798 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:07:20.451809 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:07:20.451819 | orchestrator | 2025-09-06 01:07:20.451830 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-06 01:07:20.451841 | orchestrator | Saturday 06 September 2025 01:07:15 +0000 (0:00:00.500) 0:02:16.235 **** 2025-09-06 01:07:20.451858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-06 01:07:20.451870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-06 01:07:20.451882 | orchestrator | 2025-09-06 01:07:20.451893 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-06 01:07:20.451904 | orchestrator | Saturday 06 September 2025 01:07:18 +0000 (0:00:02.489) 0:02:18.724 **** 2025-09-06 01:07:20.451914 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:07:20.451925 | orchestrator | 2025-09-06 01:07:20.451936 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 01:07:20.451948 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-06 01:07:20.451959 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-06 01:07:20.451976 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-06 01:07:20.451987 | orchestrator | 2025-09-06 01:07:20.451998 | orchestrator | 2025-09-06 01:07:20.452009 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-06 01:07:20.452020 | orchestrator | Saturday 06 September 2025 01:07:18 +0000 (0:00:00.265) 0:02:18.990 **** 2025-09-06 01:07:20.452031 | orchestrator | =============================================================================== 2025-09-06 01:07:20.452041 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.65s 2025-09-06 01:07:20.452052 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.51s 2025-09-06 01:07:20.452063 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.66s 2025-09-06 01:07:20.452074 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.49s 2025-09-06 01:07:20.452085 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.37s 2025-09-06 01:07:20.452095 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.34s 2025-09-06 01:07:20.452111 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.19s 2025-09-06 01:07:20.452123 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.74s 2025-09-06 01:07:20.452134 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.64s 2025-09-06 01:07:20.452145 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.47s 2025-09-06 01:07:20.452156 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.41s 2025-09-06 01:07:20.452166 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.39s 2025-09-06 01:07:20.452177 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.33s 2025-09-06 01:07:20.452188 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.30s 2025-09-06 01:07:20.452199 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.07s 2025-09-06 01:07:20.452210 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.03s 2025-09-06 01:07:20.452220 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.01s 2025-09-06 01:07:20.452231 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.83s 2025-09-06 01:07:20.452242 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.82s 2025-09-06 01:07:20.452253 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.69s 2025-09-06 01:07:20.452264 | orchestrator | 2025-09-06 01:07:20 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:20.452275 | orchestrator | 2025-09-06 01:07:20 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:20.452286 | orchestrator | 2025-09-06 01:07:20 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:23.497821 | orchestrator | 2025-09-06 01:07:23 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:23.498864 | orchestrator | 2025-09-06 01:07:23 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:23.498896 | orchestrator | 2025-09-06 01:07:23 | INFO  | Wait 1 second(s) until the next 
check 2025-09-06 01:07:26.541050 | orchestrator | 2025-09-06 01:07:26 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:26.542739 | orchestrator | 2025-09-06 01:07:26 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:26.542774 | orchestrator | 2025-09-06 01:07:26 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:29.591926 | orchestrator | 2025-09-06 01:07:29 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:29.592529 | orchestrator | 2025-09-06 01:07:29 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:29.592560 | orchestrator | 2025-09-06 01:07:29 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:32.640066 | orchestrator | 2025-09-06 01:07:32 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:32.641666 | orchestrator | 2025-09-06 01:07:32 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:32.641700 | orchestrator | 2025-09-06 01:07:32 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:35.691449 | orchestrator | 2025-09-06 01:07:35 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:35.693913 | orchestrator | 2025-09-06 01:07:35 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:35.694401 | orchestrator | 2025-09-06 01:07:35 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:38.740045 | orchestrator | 2025-09-06 01:07:38 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:38.741090 | orchestrator | 2025-09-06 01:07:38 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:38.741336 | orchestrator | 2025-09-06 01:07:38 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:41.782818 | orchestrator | 2025-09-06 01:07:41 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:41.784247 | orchestrator | 2025-09-06 01:07:41 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:41.784528 | orchestrator | 2025-09-06 01:07:41 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:44.828747 | orchestrator | 2025-09-06 01:07:44 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:44.829786 | orchestrator | 2025-09-06 01:07:44 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:44.829817 | orchestrator | 2025-09-06 01:07:44 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:47.863121 | orchestrator | 2025-09-06 01:07:47 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:47.863237 | orchestrator | 2025-09-06 01:07:47 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:47.863253 | orchestrator | 2025-09-06 01:07:47 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:50.912009 | orchestrator | 2025-09-06 01:07:50 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:50.916689 | orchestrator | 2025-09-06 01:07:50 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:50.916730 | orchestrator | 2025-09-06 01:07:50 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:53.956640 | orchestrator | 2025-09-06 01:07:53 | INFO  | Task 
b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:53.958980 | orchestrator | 2025-09-06 01:07:53 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:53.959061 | orchestrator | 2025-09-06 01:07:53 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:07:57.002004 | orchestrator | 2025-09-06 01:07:57 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:07:57.003628 | orchestrator | 2025-09-06 01:07:57 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:07:57.003687 | orchestrator | 2025-09-06 01:07:57 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:08:00.050859 | orchestrator | 2025-09-06 01:08:00 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:08:00.053361 | orchestrator | 2025-09-06 01:08:00 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:08:00.053395 | orchestrator | 2025-09-06 01:08:00 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:08:03.101360 | orchestrator | 2025-09-06 01:08:03 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:08:03.103463 | orchestrator | 2025-09-06 01:08:03 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:08:03.103755 | orchestrator | 2025-09-06 01:08:03 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:08:06.149555 | orchestrator | 2025-09-06 01:08:06 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:08:06.150437 | orchestrator | 2025-09-06 01:08:06 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:08:06.150472 | orchestrator | 2025-09-06 01:08:06 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:08:09.189009 | orchestrator | 2025-09-06 01:08:09 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:08:09.190296 | orchestrator | 2025-09-06 01:08:09 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:08:09.190327 | orchestrator | 2025-09-06 01:08:09 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:08:12.232934 | orchestrator | 2025-09-06 01:08:12 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:08:12.234965 | orchestrator | 2025-09-06 01:08:12 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:08:12.234997 | orchestrator | 2025-09-06 01:08:12 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:08:15.279730 | orchestrator | 2025-09-06 01:08:15 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:08:15.281970 | orchestrator | 2025-09-06 01:08:15 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:08:15.282128 | orchestrator | 2025-09-06 01:08:15 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:08:18.333284 | orchestrator | 2025-09-06 01:08:18 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:08:18.336026 | orchestrator | 2025-09-06 01:08:18 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED 2025-09-06 01:08:18.336699 | orchestrator | 2025-09-06 01:08:18 | INFO  | Wait 1 second(s) until the next check 2025-09-06 01:08:21.380351 | orchestrator | 2025-09-06 01:08:21 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED 2025-09-06 01:08:21.382219 | orchestrator 
| 2025-09-06 01:08:21 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED
2025-09-06 01:08:21.382273 | orchestrator | 2025-09-06 01:08:21 | INFO  | Wait 1 second(s) until the next check
2025-09-06 01:08:24.430286 | orchestrator | 2025-09-06 01:08:24 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED
2025-09-06 01:08:24.431186 | orchestrator | 2025-09-06 01:08:24 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED
2025-09-06 01:08:24.431209 | orchestrator | 2025-09-06 01:08:24 | INFO  | Wait 1 second(s) until the next check
2025-09-06 01:08:27.483747 | orchestrator | 2025-09-06 01:08:27 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED
2025-09-06 01:08:27.487449 | orchestrator | 2025-09-06 01:08:27 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED
2025-09-06 01:08:27.488276 | orchestrator | 2025-09-06 01:08:27 | INFO  | Wait 1 second(s) until the next check
2025-09-06 01:08:30.524475 | orchestrator | 2025-09-06 01:08:30 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state STARTED
2025-09-06 01:08:30.526303 | orchestrator | 2025-09-06 01:08:30 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED
2025-09-06 01:08:30.526346 | orchestrator | 2025-09-06 01:08:30 | INFO  | Wait 1 second(s) until the next check
2025-09-06 01:08:33.569557 | orchestrator | 2025-09-06 01:08:33 | INFO  | Task b0104131-6163-4654-8be3-d664483dbea6 is in state SUCCESS
2025-09-06 01:08:33.571011 | orchestrator | 2025-09-06 01:08:33 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state STARTED
2025-09-06 01:08:33.571389 | orchestrator | 2025-09-06 01:08:33 | INFO  | Wait 1 second(s) until the next check
[... the same poll cycle for task 807884c1-8978-4059-b21e-ac900bf716c9 ("is in state STARTED" / "Wait 1 second(s) until the next check") repeats every ~3 seconds from 01:08:36 through 01:10:41 ...]
2025-09-06 01:10:44.463419 | orchestrator |
2025-09-06 01:10:44.464452 | orchestrator |
2025-09-06 01:10:44.464474 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-09-06 01:10:44.464488 | orchestrator |
2025-09-06 01:10:44.464500 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-09-06 01:10:44.464512 | orchestrator | Saturday 06 September 2025 01:02:05 +0000 (0:00:00.239) 0:00:00.239 ****
2025-09-06 01:10:44.464523 | orchestrator | changed: [localhost]
2025-09-06 01:10:44.464535 | orchestrator |
2025-09-06 01:10:44.464546 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-09-06 01:10:44.464557 | orchestrator | Saturday 06 September 2025 01:02:06 +0000 (0:00:01.244) 0:00:01.483 ****
2025-09-06 01:10:44.464568 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2025-09-06 01:10:44.464580 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
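[Editor's note: the two FAILED - RETRYING messages above come from Ansible's per-task retry loop: the download is attempted, and on failure the task is re-run until it succeeds or the retry budget is exhausted. As a minimal, illustrative sketch only (the URL, destination path, register name, and timing values are placeholders, not the actual OSISM task), a task that produces this kind of output looks like:

  - name: Download ironic-agent initramfs
    ansible.builtin.get_url:
      url: "https://example.org/ironic-python-agent.initramfs"  # placeholder, not the real image URL
      dest: /opt/ironic/ironic-agent.initramfs                  # placeholder destination path
      mode: "0644"
    register: initramfs_download                                # hypothetical variable name
    until: initramfs_download is succeeded
    retries: 3    # each failed attempt prints "FAILED - RETRYING: ... (N retries left)."
    delay: 10     # seconds between attempts (placeholder value)

In the output that follows, the task eventually reports changed: [localhost], and the TASKS RECAP shows it took 371.79s, so a later attempt succeeded.]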
2025-09-06 01:10:44.464595 | orchestrator |
2025-09-06 01:10:44.464611 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-06 01:10:44.464623 | orchestrator |
2025-09-06 01:10:44.464633 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-06 01:10:44.464644 | orchestrator |
2025-09-06 01:10:44.464655 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-06 01:10:44.464665 | orchestrator |
2025-09-06 01:10:44.464676 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-06 01:10:44.464687 | orchestrator |
2025-09-06 01:10:44.464697 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-06 01:10:44.464708 | orchestrator |
2025-09-06 01:10:44.464719 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-06 01:10:44.464783 | orchestrator |
2025-09-06 01:10:44.464795 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-06 01:10:44.464806 | orchestrator |
2025-09-06 01:10:44.464817 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-06 01:10:44.464828 | orchestrator | changed: [localhost]
2025-09-06 01:10:44.464839 | orchestrator |
2025-09-06 01:10:44.464850 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-09-06 01:10:44.464861 | orchestrator | Saturday 06 September 2025 01:08:18 +0000 (0:06:11.789) 0:06:13.272 ****
2025-09-06 01:10:44.464872 | orchestrator | changed: [localhost]
2025-09-06 01:10:44.464883 | orchestrator |
2025-09-06 01:10:44.464895 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-06 01:10:44.464906 | orchestrator |
2025-09-06 01:10:44.464917 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-06 01:10:44.464929 | orchestrator | Saturday 06 September 2025 01:08:31 +0000 (0:00:12.901) 0:06:26.174 ****
2025-09-06 01:10:44.464942 | orchestrator | ok: [testbed-node-0]
2025-09-06 01:10:44.464955 | orchestrator | ok: [testbed-node-1]
2025-09-06 01:10:44.464968 | orchestrator | ok: [testbed-node-2]
2025-09-06 01:10:44.464980 | orchestrator |
2025-09-06 01:10:44.464993 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-06 01:10:44.465005 | orchestrator | Saturday 06 September 2025 01:08:31 +0000 (0:00:00.315) 0:06:26.489 ****
2025-09-06 01:10:44.465018 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-09-06 01:10:44.465030 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-09-06 01:10:44.465043 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-09-06 01:10:44.465055 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-09-06 01:10:44.465068 | orchestrator |
2025-09-06 01:10:44.465080 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-09-06 01:10:44.465120 | orchestrator | skipping: no hosts matched
2025-09-06 01:10:44.465133 | orchestrator |
2025-09-06 01:10:44.465173 | orchestrator | PLAY RECAP *********************************************************************
2025-09-06 01:10:44.465187 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-06 01:10:44.465203 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-06 01:10:44.465218 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-06 01:10:44.465229 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-06 01:10:44.465239 | orchestrator |
2025-09-06 01:10:44.465250 | orchestrator |
2025-09-06 01:10:44.465261 | orchestrator | TASKS RECAP ********************************************************************
2025-09-06 01:10:44.465272 | orchestrator | Saturday 06 September 2025 01:08:32 +0000 (0:00:00.599) 0:06:27.089 ****
2025-09-06 01:10:44.465283 | orchestrator | ===============================================================================
2025-09-06 01:10:44.465293 | orchestrator | Download ironic-agent initramfs --------------------------------------- 371.79s
2025-09-06 01:10:44.465304 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.90s
2025-09-06 01:10:44.465315 | orchestrator | Ensure the destination directory exists --------------------------------- 1.24s
2025-09-06 01:10:44.465325 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2025-09-06 01:10:44.465336 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2025-09-06 01:10:44.465346 | orchestrator |
2025-09-06 01:10:44.465358 | orchestrator |
2025-09-06 01:10:44.465369 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-06 01:10:44.465380 | orchestrator |
2025-09-06 01:10:44.465390 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-06 01:10:44.465401 | orchestrator | Saturday 06 September 2025 01:05:49 +0000 (0:00:00.200) 0:00:00.200 ****
2025-09-06 01:10:44.465411 | orchestrator | ok: [testbed-node-0]
2025-09-06 01:10:44.465422 | orchestrator | ok: [testbed-node-1]
2025-09-06 01:10:44.465449 | orchestrator | ok: [testbed-node-2]
2025-09-06 01:10:44.465460 | orchestrator |
2025-09-06 01:10:44.465533 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-06 01:10:44.465547 | orchestrator | Saturday 06 September 2025 01:05:49 +0000 (0:00:00.250) 0:00:00.451 ****
2025-09-06 01:10:44.465558 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-09-06 01:10:44.465570 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-09-06 01:10:44.465580 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-09-06 01:10:44.465591 | orchestrator |
2025-09-06 01:10:44.465602 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-09-06 01:10:44.465613 | orchestrator |
2025-09-06 01:10:44.465623 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-06 01:10:44.465634 | orchestrator | Saturday 06 September 2025 01:05:50 +0000 (0:00:00.309) 0:00:00.760 ****
2025-09-06 01:10:44.465645 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-06 01:10:44.465656 | orchestrator |
2025-09-06 01:10:44.465667 | orchestrator | TASK [service-ks-register : octavia |
Creating services] *********************** 2025-09-06 01:10:44.465678 | orchestrator | Saturday 06 September 2025 01:05:50 +0000 (0:00:00.509) 0:00:01.270 **** 2025-09-06 01:10:44.465688 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-06 01:10:44.465699 | orchestrator | 2025-09-06 01:10:44.465710 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-06 01:10:44.465720 | orchestrator | Saturday 06 September 2025 01:05:54 +0000 (0:00:03.506) 0:00:04.776 **** 2025-09-06 01:10:44.465740 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-06 01:10:44.465751 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-06 01:10:44.465762 | orchestrator | 2025-09-06 01:10:44.465773 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-06 01:10:44.465784 | orchestrator | Saturday 06 September 2025 01:06:00 +0000 (0:00:06.765) 0:00:11.541 **** 2025-09-06 01:10:44.465794 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-06 01:10:44.465805 | orchestrator | 2025-09-06 01:10:44.465816 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-06 01:10:44.465826 | orchestrator | Saturday 06 September 2025 01:06:04 +0000 (0:00:03.510) 0:00:15.052 **** 2025-09-06 01:10:44.465837 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-06 01:10:44.465848 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-06 01:10:44.465859 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-06 01:10:44.465869 | orchestrator | 2025-09-06 01:10:44.465880 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-06 01:10:44.465891 | orchestrator | Saturday 06 September 2025 01:06:12 +0000 (0:00:08.528) 0:00:23.580 **** 2025-09-06 01:10:44.465902 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-06 01:10:44.465912 | orchestrator | 2025-09-06 01:10:44.465923 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-06 01:10:44.465934 | orchestrator | Saturday 06 September 2025 01:06:16 +0000 (0:00:03.491) 0:00:27.072 **** 2025-09-06 01:10:44.465944 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-06 01:10:44.465955 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-06 01:10:44.465965 | orchestrator | 2025-09-06 01:10:44.465976 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-06 01:10:44.465987 | orchestrator | Saturday 06 September 2025 01:06:23 +0000 (0:00:07.634) 0:00:34.706 **** 2025-09-06 01:10:44.466003 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-06 01:10:44.466014 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-06 01:10:44.466075 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-06 01:10:44.466086 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-06 01:10:44.466097 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-06 01:10:44.466108 | orchestrator | 2025-09-06 01:10:44.466119 | orchestrator | TASK [octavia 
: include_tasks] ************************************************* 2025-09-06 01:10:44.466129 | orchestrator | Saturday 06 September 2025 01:06:40 +0000 (0:00:16.302) 0:00:51.009 **** 2025-09-06 01:10:44.466206 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:10:44.466220 | orchestrator | 2025-09-06 01:10:44.466231 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-06 01:10:44.466242 | orchestrator | Saturday 06 September 2025 01:06:40 +0000 (0:00:00.574) 0:00:51.583 **** 2025-09-06 01:10:44.466253 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.466263 | orchestrator | 2025-09-06 01:10:44.466274 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-09-06 01:10:44.466285 | orchestrator | Saturday 06 September 2025 01:06:45 +0000 (0:00:04.400) 0:00:55.984 **** 2025-09-06 01:10:44.466295 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.466306 | orchestrator | 2025-09-06 01:10:44.466317 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-06 01:10:44.466328 | orchestrator | Saturday 06 September 2025 01:06:49 +0000 (0:00:04.088) 0:01:00.072 **** 2025-09-06 01:10:44.466338 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:10:44.466349 | orchestrator | 2025-09-06 01:10:44.466369 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-09-06 01:10:44.466380 | orchestrator | Saturday 06 September 2025 01:06:52 +0000 (0:00:03.169) 0:01:03.242 **** 2025-09-06 01:10:44.466391 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-06 01:10:44.466402 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-06 01:10:44.466413 | orchestrator | 2025-09-06 01:10:44.466465 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-09-06 01:10:44.466478 | orchestrator | Saturday 06 September 2025 01:07:02 +0000 (0:00:09.694) 0:01:12.936 **** 2025-09-06 01:10:44.466489 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-09-06 01:10:44.466500 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-09-06 01:10:44.466512 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-09-06 01:10:44.466522 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-09-06 01:10:44.466533 | orchestrator | 2025-09-06 01:10:44.466544 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-09-06 01:10:44.466555 | orchestrator | Saturday 06 September 2025 01:07:19 +0000 (0:00:17.047) 0:01:29.984 **** 2025-09-06 01:10:44.466566 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.466576 | orchestrator | 2025-09-06 01:10:44.466587 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-09-06 01:10:44.466598 | orchestrator | Saturday 06 September 2025 01:07:23 +0000 (0:00:04.585) 0:01:34.570 **** 2025-09-06 01:10:44.466608 | 
orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.466619 | orchestrator | 2025-09-06 01:10:44.466630 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-09-06 01:10:44.466640 | orchestrator | Saturday 06 September 2025 01:07:29 +0000 (0:00:05.667) 0:01:40.237 **** 2025-09-06 01:10:44.466651 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:10:44.466662 | orchestrator | 2025-09-06 01:10:44.466672 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-09-06 01:10:44.466683 | orchestrator | Saturday 06 September 2025 01:07:29 +0000 (0:00:00.215) 0:01:40.453 **** 2025-09-06 01:10:44.466694 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.466704 | orchestrator | 2025-09-06 01:10:44.466714 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-06 01:10:44.466723 | orchestrator | Saturday 06 September 2025 01:07:35 +0000 (0:00:05.314) 0:01:45.767 **** 2025-09-06 01:10:44.466733 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:10:44.466742 | orchestrator | 2025-09-06 01:10:44.466752 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-09-06 01:10:44.466761 | orchestrator | Saturday 06 September 2025 01:07:36 +0000 (0:00:01.002) 0:01:46.769 **** 2025-09-06 01:10:44.466771 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.466780 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.466790 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.466799 | orchestrator | 2025-09-06 01:10:44.466809 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-09-06 01:10:44.466818 | orchestrator | Saturday 06 September 2025 01:07:41 +0000 (0:00:05.048) 0:01:51.818 **** 2025-09-06 01:10:44.466828 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.466874 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.466885 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.466894 | orchestrator | 2025-09-06 01:10:44.466904 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-09-06 01:10:44.466913 | orchestrator | Saturday 06 September 2025 01:07:46 +0000 (0:00:04.980) 0:01:56.798 **** 2025-09-06 01:10:44.466929 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.466939 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.466948 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.466958 | orchestrator | 2025-09-06 01:10:44.466967 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-09-06 01:10:44.466977 | orchestrator | Saturday 06 September 2025 01:07:46 +0000 (0:00:00.875) 0:01:57.674 **** 2025-09-06 01:10:44.466986 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:10:44.466996 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:10:44.467005 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:10:44.467015 | orchestrator | 2025-09-06 01:10:44.467024 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-09-06 01:10:44.467034 | orchestrator | Saturday 06 September 2025 01:07:49 +0000 (0:00:02.428) 0:02:00.102 **** 2025-09-06 01:10:44.467043 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.467053 | 
orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.467062 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.467071 | orchestrator | 2025-09-06 01:10:44.467081 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-09-06 01:10:44.467090 | orchestrator | Saturday 06 September 2025 01:07:50 +0000 (0:00:01.340) 0:02:01.443 **** 2025-09-06 01:10:44.467100 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.467109 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.467119 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.467128 | orchestrator | 2025-09-06 01:10:44.467155 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-09-06 01:10:44.467174 | orchestrator | Saturday 06 September 2025 01:07:52 +0000 (0:00:01.323) 0:02:02.766 **** 2025-09-06 01:10:44.467190 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.467200 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.467210 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.467219 | orchestrator | 2025-09-06 01:10:44.467229 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-09-06 01:10:44.467238 | orchestrator | Saturday 06 September 2025 01:07:54 +0000 (0:00:02.072) 0:02:04.839 **** 2025-09-06 01:10:44.467248 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.467257 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.467267 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.467276 | orchestrator | 2025-09-06 01:10:44.467285 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-09-06 01:10:44.467334 | orchestrator | Saturday 06 September 2025 01:07:55 +0000 (0:00:01.620) 0:02:06.459 **** 2025-09-06 01:10:44.467346 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:10:44.467356 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:10:44.467365 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:10:44.467375 | orchestrator | 2025-09-06 01:10:44.467384 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-09-06 01:10:44.467394 | orchestrator | Saturday 06 September 2025 01:07:56 +0000 (0:00:00.900) 0:02:07.360 **** 2025-09-06 01:10:44.467403 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:10:44.467413 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:10:44.467422 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:10:44.467432 | orchestrator | 2025-09-06 01:10:44.467441 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-06 01:10:44.467451 | orchestrator | Saturday 06 September 2025 01:07:59 +0000 (0:00:02.756) 0:02:10.116 **** 2025-09-06 01:10:44.467461 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:10:44.467470 | orchestrator | 2025-09-06 01:10:44.467480 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-09-06 01:10:44.467490 | orchestrator | Saturday 06 September 2025 01:07:59 +0000 (0:00:00.556) 0:02:10.673 **** 2025-09-06 01:10:44.467500 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:10:44.467509 | orchestrator | 2025-09-06 01:10:44.467525 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-06 01:10:44.467535 | 
orchestrator | Saturday 06 September 2025 01:08:03 +0000 (0:00:03.864) 0:02:14.537 **** 2025-09-06 01:10:44.467544 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:10:44.467554 | orchestrator | 2025-09-06 01:10:44.467563 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-09-06 01:10:44.467572 | orchestrator | Saturday 06 September 2025 01:08:06 +0000 (0:00:03.105) 0:02:17.643 **** 2025-09-06 01:10:44.467582 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-06 01:10:44.467592 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-06 01:10:44.467601 | orchestrator | 2025-09-06 01:10:44.467611 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-09-06 01:10:44.467621 | orchestrator | Saturday 06 September 2025 01:08:13 +0000 (0:00:06.924) 0:02:24.568 **** 2025-09-06 01:10:44.467630 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:10:44.467640 | orchestrator | 2025-09-06 01:10:44.467649 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-09-06 01:10:44.467659 | orchestrator | Saturday 06 September 2025 01:08:17 +0000 (0:00:03.540) 0:02:28.108 **** 2025-09-06 01:10:44.467669 | orchestrator | ok: [testbed-node-0] 2025-09-06 01:10:44.467678 | orchestrator | ok: [testbed-node-1] 2025-09-06 01:10:44.467688 | orchestrator | ok: [testbed-node-2] 2025-09-06 01:10:44.467697 | orchestrator | 2025-09-06 01:10:44.467707 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-09-06 01:10:44.467716 | orchestrator | Saturday 06 September 2025 01:08:17 +0000 (0:00:00.338) 0:02:28.447 **** 2025-09-06 01:10:44.467729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.467744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.467790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.467809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.467821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.467831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.467842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.467852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.467862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.467904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.467922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.467932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.467942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.467952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.467962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.467972 | orchestrator | 2025-09-06 01:10:44.467983 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-06 01:10:44.467993 | orchestrator | Saturday 06 September 2025 01:08:20 +0000 (0:00:02.666) 0:02:31.113 **** 2025-09-06 01:10:44.468003 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:10:44.468012 | orchestrator | 2025-09-06 01:10:44.468022 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-06 01:10:44.468039 | orchestrator | Saturday 06 September 2025 01:08:20 +0000 (0:00:00.138) 0:02:31.252 **** 2025-09-06 01:10:44.468049 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:10:44.468058 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:10:44.468068 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:10:44.468078 | orchestrator | 2025-09-06 01:10:44.468091 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-06 01:10:44.468127 | orchestrator | Saturday 06 September 2025 01:08:21 +0000 (0:00:00.513) 0:02:31.766 **** 2025-09-06 01:10:44.468188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 01:10:44.468202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 01:10:44.468213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:10:44.468244 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:10:44.468294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 01:10:44.468314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 01:10:44.468324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:10:44.468355 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:10:44.468365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 01:10:44.468381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 01:10:44.468424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager',2025-09-06 01:10:44 | INFO  | Task 807884c1-8978-4059-b21e-ac900bf716c9 is in state SUCCESS 2025-09-06 01:10:44.468437 | orchestrator | 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:10:44.468467 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:10:44.468477 | orchestrator | 2025-09-06 01:10:44.468486 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-06 01:10:44.468496 | orchestrator | Saturday 06 September 2025 01:08:21 +0000 (0:00:00.675) 0:02:32.441 **** 2025-09-06 01:10:44.468506 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-06 01:10:44.468515 | orchestrator | 2025-09-06 01:10:44.468525 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-06 01:10:44.468534 | orchestrator | Saturday 06 September 2025 01:08:22 +0000 (0:00:00.563) 0:02:33.004 **** 2025-09-06 01:10:44.468544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.468590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.468603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.468613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.468623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2025-09-06 01:10:44.468633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.468644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.468661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.468704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.468715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.468723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.468731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.468739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.468754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.468762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.468770 | orchestrator | 2025-09-06 01:10:44.468778 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-06 01:10:44.468786 | orchestrator | Saturday 06 September 2025 01:08:27 +0000 (0:00:05.514) 0:02:38.519 **** 2025-09-06 01:10:44.468804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 01:10:44.468813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 01:10:44.468821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:10:44.468874 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:10:44.468893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 01:10:44.468902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 01:10:44.468911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:10:44.468940 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:10:44.468948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 01:10:44.468957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 01:10:44.468974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.468991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:10:44.468999 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:10:44.469007 | orchestrator | 2025-09-06 01:10:44.469015 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-06 01:10:44.469028 | orchestrator | Saturday 06 September 2025 01:08:28 +0000 (0:00:00.895) 0:02:39.415 **** 2025-09-06 01:10:44.469037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 01:10:44.469045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 01:10:44.469053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.469073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.469083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:10:44.469091 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:10:44.469099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 01:10:44.469112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 01:10:44.469121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.469129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.469162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:10:44.469171 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:10:44.469180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-06 01:10:44.469188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-06 01:10:44.469204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.469213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-06 01:10:44.469221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-06 01:10:44.469229 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:10:44.469237 | orchestrator | 2025-09-06 01:10:44.469245 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-06 01:10:44.469253 | orchestrator | Saturday 06 September 2025 01:08:29 +0000 (0:00:00.983) 0:02:40.399 **** 2025-09-06 01:10:44.469270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.469279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.469293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.469301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.469310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.469318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.469336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469428 | orchestrator | 2025-09-06 01:10:44.469437 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-06 01:10:44.469445 | orchestrator | Saturday 06 September 2025 01:08:35 +0000 (0:00:05.569) 0:02:45.969 **** 2025-09-06 01:10:44.469453 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-06 01:10:44.469461 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-06 01:10:44.469469 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-06 01:10:44.469476 | orchestrator | 2025-09-06 01:10:44.469484 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-06 01:10:44.469492 | orchestrator | Saturday 06 September 2025 01:08:37 +0000 (0:00:02.141) 0:02:48.111 **** 2025-09-06 01:10:44.469501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.469509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.469526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.469540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.469549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.469557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.469565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.469652 | orchestrator | 2025-09-06 
01:10:44.469660 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-06 01:10:44.469668 | orchestrator | Saturday 06 September 2025 01:08:53 +0000 (0:00:16.379) 0:03:04.490 **** 2025-09-06 01:10:44.469676 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.469684 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.469692 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.469699 | orchestrator | 2025-09-06 01:10:44.469707 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-06 01:10:44.469715 | orchestrator | Saturday 06 September 2025 01:08:55 +0000 (0:00:01.571) 0:03:06.061 **** 2025-09-06 01:10:44.469723 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-06 01:10:44.469736 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-06 01:10:44.469743 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-06 01:10:44.469751 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-06 01:10:44.469759 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-06 01:10:44.469773 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-06 01:10:44.469785 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-06 01:10:44.469794 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-06 01:10:44.469802 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-06 01:10:44.469810 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-06 01:10:44.469818 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-06 01:10:44.469825 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-06 01:10:44.469833 | orchestrator | 2025-09-06 01:10:44.469841 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-06 01:10:44.469849 | orchestrator | Saturday 06 September 2025 01:09:00 +0000 (0:00:05.438) 0:03:11.500 **** 2025-09-06 01:10:44.469857 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-06 01:10:44.469865 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-06 01:10:44.469873 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-06 01:10:44.469881 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-06 01:10:44.469889 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-06 01:10:44.469896 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-06 01:10:44.469904 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-06 01:10:44.469912 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-06 01:10:44.469920 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-06 01:10:44.469928 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-06 01:10:44.469935 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-06 01:10:44.469943 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-06 01:10:44.469951 | orchestrator | 2025-09-06 01:10:44.469959 | orchestrator | TASK [octavia : Copying certificate files for 
octavia-health-manager] ********** 2025-09-06 01:10:44.469967 | orchestrator | Saturday 06 September 2025 01:09:06 +0000 (0:00:05.438) 0:03:16.938 **** 2025-09-06 01:10:44.469974 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-06 01:10:44.469982 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-06 01:10:44.469990 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-06 01:10:44.469998 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-06 01:10:44.470006 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-06 01:10:44.470014 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-06 01:10:44.470069 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-06 01:10:44.470078 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-06 01:10:44.470086 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-06 01:10:44.470093 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-06 01:10:44.470101 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-06 01:10:44.470109 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-06 01:10:44.470116 | orchestrator | 2025-09-06 01:10:44.470124 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-06 01:10:44.470133 | orchestrator | Saturday 06 September 2025 01:09:11 +0000 (0:00:05.732) 0:03:22.670 **** 2025-09-06 01:10:44.470162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.470182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.470191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-06 01:10:44.470199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.470208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.470216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-06 01:10:44.470230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.470239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.470255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.470265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.470273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.470281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-06 01:10:44.470295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 
'timeout': '30'}}}) 2025-09-06 01:10:44.470303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.470311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-06 01:10:44.470319 | orchestrator | 2025-09-06 01:10:44.470331 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-06 01:10:44.470343 | orchestrator | Saturday 06 September 2025 01:09:15 +0000 (0:00:03.913) 0:03:26.584 **** 2025-09-06 01:10:44.470351 | orchestrator | skipping: [testbed-node-0] 2025-09-06 01:10:44.470360 | orchestrator | skipping: [testbed-node-1] 2025-09-06 01:10:44.470367 | orchestrator | skipping: [testbed-node-2] 2025-09-06 01:10:44.470375 | orchestrator | 2025-09-06 01:10:44.470383 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-06 01:10:44.470391 | orchestrator | Saturday 06 September 2025 01:09:16 +0000 (0:00:00.321) 0:03:26.906 **** 2025-09-06 01:10:44.470399 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.470407 | orchestrator | 2025-09-06 01:10:44.470414 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-06 01:10:44.470422 | orchestrator | Saturday 06 September 2025 01:09:18 +0000 (0:00:02.099) 0:03:29.006 **** 2025-09-06 01:10:44.470430 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.470438 | orchestrator | 2025-09-06 01:10:44.470446 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-06 01:10:44.470454 | orchestrator | Saturday 06 September 2025 01:09:20 +0000 (0:00:02.078) 0:03:31.084 **** 2025-09-06 01:10:44.470461 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.470469 | orchestrator | 2025-09-06 01:10:44.470477 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-06 01:10:44.470485 | orchestrator | Saturday 06 September 2025 01:09:22 +0000 (0:00:02.276) 0:03:33.361 **** 2025-09-06 01:10:44.470493 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.470500 | orchestrator | 2025-09-06 01:10:44.470508 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-06 01:10:44.470516 | orchestrator | Saturday 06 September 2025 01:09:24 +0000 (0:00:02.096) 0:03:35.457 **** 2025-09-06 01:10:44.470524 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.470532 | 
orchestrator | 2025-09-06 01:10:44.470540 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-06 01:10:44.470552 | orchestrator | Saturday 06 September 2025 01:09:46 +0000 (0:00:21.689) 0:03:57.146 **** 2025-09-06 01:10:44.470560 | orchestrator | 2025-09-06 01:10:44.470568 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-06 01:10:44.470576 | orchestrator | Saturday 06 September 2025 01:09:46 +0000 (0:00:00.067) 0:03:57.214 **** 2025-09-06 01:10:44.470584 | orchestrator | 2025-09-06 01:10:44.470592 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-06 01:10:44.470600 | orchestrator | Saturday 06 September 2025 01:09:46 +0000 (0:00:00.066) 0:03:57.280 **** 2025-09-06 01:10:44.470607 | orchestrator | 2025-09-06 01:10:44.470615 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-06 01:10:44.470623 | orchestrator | Saturday 06 September 2025 01:09:46 +0000 (0:00:00.062) 0:03:57.343 **** 2025-09-06 01:10:44.470631 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.470639 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.470646 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.470654 | orchestrator | 2025-09-06 01:10:44.470662 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-06 01:10:44.470670 | orchestrator | Saturday 06 September 2025 01:10:04 +0000 (0:00:17.493) 0:04:14.836 **** 2025-09-06 01:10:44.470678 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.470686 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.470693 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.470701 | orchestrator | 2025-09-06 01:10:44.470709 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-06 01:10:44.470717 | orchestrator | Saturday 06 September 2025 01:10:15 +0000 (0:00:11.771) 0:04:26.607 **** 2025-09-06 01:10:44.470725 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.470732 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.470740 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.470748 | orchestrator | 2025-09-06 01:10:44.470756 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-06 01:10:44.470764 | orchestrator | Saturday 06 September 2025 01:10:26 +0000 (0:00:10.590) 0:04:37.198 **** 2025-09-06 01:10:44.470771 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.470779 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.470787 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.470795 | orchestrator | 2025-09-06 01:10:44.470803 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-06 01:10:44.470811 | orchestrator | Saturday 06 September 2025 01:10:37 +0000 (0:00:10.625) 0:04:47.824 **** 2025-09-06 01:10:44.470818 | orchestrator | changed: [testbed-node-0] 2025-09-06 01:10:44.470826 | orchestrator | changed: [testbed-node-2] 2025-09-06 01:10:44.470834 | orchestrator | changed: [testbed-node-1] 2025-09-06 01:10:44.470842 | orchestrator | 2025-09-06 01:10:44.470849 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-06 01:10:44.470857 | orchestrator | testbed-node-0 : ok=57  
changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-06 01:10:44.470866 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-06 01:10:44.470874 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-06 01:10:44.470882 | orchestrator | 2025-09-06 01:10:44.470889 | orchestrator | 2025-09-06 01:10:44.470897 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-06 01:10:44.470905 | orchestrator | Saturday 06 September 2025 01:10:42 +0000 (0:00:05.523) 0:04:53.347 **** 2025-09-06 01:10:44.470913 | orchestrator | =============================================================================== 2025-09-06 01:10:44.470921 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.69s 2025-09-06 01:10:44.470933 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.49s 2025-09-06 01:10:44.470949 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.05s 2025-09-06 01:10:44.470958 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.38s 2025-09-06 01:10:44.470966 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.30s 2025-09-06 01:10:44.470974 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.77s 2025-09-06 01:10:44.470981 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.63s 2025-09-06 01:10:44.470989 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.59s 2025-09-06 01:10:44.470997 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.69s 2025-09-06 01:10:44.471005 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.53s 2025-09-06 01:10:44.471013 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.63s 2025-09-06 01:10:44.471020 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.92s 2025-09-06 01:10:44.471028 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.77s 2025-09-06 01:10:44.471036 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.73s 2025-09-06 01:10:44.471044 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.67s 2025-09-06 01:10:44.471052 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.57s 2025-09-06 01:10:44.471060 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.52s 2025-09-06 01:10:44.471068 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.52s 2025-09-06 01:10:44.471075 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.44s 2025-09-06 01:10:44.471083 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.44s 2025-09-06 01:10:44.471091 | orchestrator | 2025-09-06 01:10:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:10:47.498497 | orchestrator | 2025-09-06 01:10:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:10:50.541073 | orchestrator 
| 2025-09-06 01:10:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:10:53.584305 | orchestrator | 2025-09-06 01:10:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:10:56.623793 | orchestrator | 2025-09-06 01:10:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:10:59.668075 | orchestrator | 2025-09-06 01:10:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:02.705486 | orchestrator | 2025-09-06 01:11:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:05.749080 | orchestrator | 2025-09-06 01:11:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:08.793076 | orchestrator | 2025-09-06 01:11:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:11.836509 | orchestrator | 2025-09-06 01:11:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:14.883440 | orchestrator | 2025-09-06 01:11:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:17.923408 | orchestrator | 2025-09-06 01:11:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:20.968468 | orchestrator | 2025-09-06 01:11:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:24.005613 | orchestrator | 2025-09-06 01:11:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:27.043422 | orchestrator | 2025-09-06 01:11:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:30.083971 | orchestrator | 2025-09-06 01:11:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:33.126576 | orchestrator | 2025-09-06 01:11:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:36.169780 | orchestrator | 2025-09-06 01:11:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:39.211307 | orchestrator | 2025-09-06 01:11:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:42.254688 | orchestrator | 2025-09-06 01:11:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-06 01:11:45.293657 | orchestrator | 2025-09-06 01:11:45.626864 | orchestrator | 2025-09-06 01:11:45.631479 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Sep 6 01:11:45 UTC 2025 2025-09-06 01:11:45.631511 | orchestrator | 2025-09-06 01:11:46.111489 | orchestrator | ok: Runtime: 0:34:40.873069 2025-09-06 01:11:46.401694 | 2025-09-06 01:11:46.401870 | TASK [Bootstrap services] 2025-09-06 01:11:47.142214 | orchestrator | 2025-09-06 01:11:47.142390 | orchestrator | # BOOTSTRAP 2025-09-06 01:11:47.142412 | orchestrator | 2025-09-06 01:11:47.142427 | orchestrator | + set -e 2025-09-06 01:11:47.142440 | orchestrator | + echo 2025-09-06 01:11:47.142454 | orchestrator | + echo '# BOOTSTRAP' 2025-09-06 01:11:47.142472 | orchestrator | + echo 2025-09-06 01:11:47.142517 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-06 01:11:47.151696 | orchestrator | + set -e 2025-09-06 01:11:47.151745 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-06 01:11:51.044563 | orchestrator | 2025-09-06 01:11:51 | INFO  | It takes a moment until task 3fe33877-5e30-4a94-a71e-c901273eef1f (flavor-manager) has been started and output is visible here. 
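The traceback that follows shows the flavor-manager task aborting with KeyError: 'recommended': the run has recommended=True, but the definition set named 'local' (see the locals dump) only carries 'reference' and 'mandatory' sections, so the lookup definitions["recommended"] in FlavorManager.__init__ fails. The short Python sketch below reproduces that failure mode and shows a defensive guard; the definitions literal and variable names are illustrative stand-ins, not the actual openstack_flavor_manager code.

    # Minimal sketch of the failure seen in the traceback below (assumption:
    # the definition set simply has no "recommended" section, as the locals show).
    definitions = {
        "reference": [{"field": "name", "mandatory_prefix": "SCS-"}],
        "mandatory": [{"name": "SCS-1L-1", "cpus": 1, "ram": 1024, "disk": 0}],
        # note: no "recommended" key here
    }
    recommended = True

    # What the traceback shows happening (raises KeyError: 'recommended'):
    #     recommended_flavors = definitions["recommended"]

    # A tolerant variant that treats a missing section as "no recommended flavors":
    recommended_flavors = definitions.get("recommended", []) if recommended else []
    print(recommended_flavors)  # -> []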
2025-09-06 01:11:54.505516 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-09-06 01:11:54.505638 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:194 │ 2025-09-06 01:11:54.505662 | orchestrator | │ in run │ 2025-09-06 01:11:54.505675 | orchestrator | │ │ 2025-09-06 01:11:54.505686 | orchestrator | │ 191 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True) │ 2025-09-06 01:11:54.505712 | orchestrator | │ 192 │ │ 2025-09-06 01:11:54.505723 | orchestrator | │ 193 │ definitions = get_flavor_definitions(name, url) │ 2025-09-06 01:11:54.505736 | orchestrator | │ ❱ 194 │ manager = FlavorManager( │ 2025-09-06 01:11:54.505747 | orchestrator | │ 195 │ │ cloud=Cloud(cloud), │ 2025-09-06 01:11:54.505758 | orchestrator | │ 196 │ │ definitions=definitions, │ 2025-09-06 01:11:54.505768 | orchestrator | │ 197 │ │ recommended=recommended, │ 2025-09-06 01:11:54.505779 | orchestrator | │ │ 2025-09-06 01:11:54.505792 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-06 01:11:54.505815 | orchestrator | │ │ cloud = 'admin' │ │ 2025-09-06 01:11:54.505826 | orchestrator | │ │ debug = False │ │ 2025-09-06 01:11:54.505837 | orchestrator | │ │ definitions = { │ │ 2025-09-06 01:11:54.505848 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-06 01:11:54.505859 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-06 01:11:54.505870 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-06 01:11:54.505881 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-06 01:11:54.505892 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-06 01:11:54.505903 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-06 01:11:54.505914 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-06 01:11:54.505925 | orchestrator | │ │ │ ], │ │ 2025-09-06 01:11:54.505936 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-06 01:11:54.505946 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.505957 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-06 01:11:54.505996 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.506008 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-06 01:11:54.506090 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-06 01:11:54.506105 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-06 01:11:54.506117 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.506128 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-06 01:11:54.506138 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-06 01:11:54.506149 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.506160 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.506171 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.506181 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-06 01:11:54.506192 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.506203 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-06 01:11:54.506214 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-06 01:11:54.506224 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-06 01:11:54.506253 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.506265 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-06 01:11:54.506276 | orchestrator | │ │ │ │ │ 'scs:name-v2': 
'SCS-1L-5', │ │ 2025-09-06 01:11:54.506287 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.506297 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.506308 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.506319 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-09-06 01:11:54.506337 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.506348 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-06 01:11:54.506358 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-06 01:11:54.506369 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.506381 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.506391 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-09-06 01:11:54.506402 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-09-06 01:11:54.506413 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.506424 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.506435 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.506446 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-09-06 01:11:54.506456 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.506477 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-06 01:11:54.506488 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-06 01:11:54.506499 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.506510 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.506521 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-09-06 01:11:54.506532 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-09-06 01:11:54.506542 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.506553 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.506564 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.506575 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-09-06 01:11:54.506586 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.506596 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-06 01:11:54.506607 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-06 01:11:54.506618 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.506629 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.506640 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-09-06 01:11:54.506650 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-09-06 01:11:54.506661 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.506672 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.506683 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.506694 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-09-06 01:11:54.506705 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.506721 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-06 01:11:54.506732 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-06 01:11:54.506750 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.531128 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.531167 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-09-06 01:11:54.531179 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-09-06 01:11:54.531190 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.531201 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.531212 | orchestrator | │ │ 
│ │ { │ │ 2025-09-06 01:11:54.531223 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-09-06 01:11:54.531233 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.531260 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-06 01:11:54.531272 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-06 01:11:54.531283 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.531294 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.531304 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-09-06 01:11:54.531315 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-09-06 01:11:54.531326 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.531337 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.531348 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.531358 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-09-06 01:11:54.531369 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.531380 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-06 01:11:54.531391 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-09-06 01:11:54.531401 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.531412 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.531423 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-09-06 01:11:54.531434 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-09-06 01:11:54.531444 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.531455 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.531466 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.531477 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-09-06 01:11:54.531488 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-06 01:11:54.531500 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-06 01:11:54.531511 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-06 01:11:54.531522 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.531533 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.531543 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-09-06 01:11:54.531554 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-09-06 01:11:54.531565 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.531587 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.531598 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.531609 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-09-06 01:11:54.531619 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-06 01:11:54.531637 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-06 01:11:54.531648 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-06 01:11:54.531670 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.531682 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.531693 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-09-06 01:11:54.531704 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-09-06 01:11:54.531715 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.531726 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.531737 | orchestrator | │ │ │ │ ... 
+19 │ │ 2025-09-06 01:11:54.531748 | orchestrator | │ │ │ ] │ │ 2025-09-06 01:11:54.531759 | orchestrator | │ │ } │ │ 2025-09-06 01:11:54.531770 | orchestrator | │ │ level = 'INFO' │ │ 2025-09-06 01:11:54.531780 | orchestrator | │ │ limit_memory = 32 │ │ 2025-09-06 01:11:54.531791 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | │ │ 2025-09-06 01:11:54.531802 | orchestrator | │ │ {level: <8} | '+17 │ │ 2025-09-06 01:11:54.531813 | orchestrator | │ │ name = 'local' │ │ 2025-09-06 01:11:54.531824 | orchestrator | │ │ recommended = True │ │ 2025-09-06 01:11:54.531835 | orchestrator | │ │ url = None │ │ 2025-09-06 01:11:54.531846 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-09-06 01:11:54.531860 | orchestrator | │ │ 2025-09-06 01:11:54.531871 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:101 │ 2025-09-06 01:11:54.531881 | orchestrator | │ in __init__ │ 2025-09-06 01:11:54.531892 | orchestrator | │ │ 2025-09-06 01:11:54.531903 | orchestrator | │ 98 │ │ self.required_flavors = definitions["mandatory"] │ 2025-09-06 01:11:54.531914 | orchestrator | │ 99 │ │ self.cloud = cloud │ 2025-09-06 01:11:54.531924 | orchestrator | │ 100 │ │ if recommended: │ 2025-09-06 01:11:54.531935 | orchestrator | │ ❱ 101 │ │ │ recommended_flavors = definitions["recommended"] │ 2025-09-06 01:11:54.531945 | orchestrator | │ 102 │ │ │ # Filter recommended flavors based on memory limit │ 2025-09-06 01:11:54.531956 | orchestrator | │ 103 │ │ │ limit_memory_mb = limit_memory * 1024 │ 2025-09-06 01:11:54.531967 | orchestrator | │ 104 │ │ │ filtered_recommended = [ │ 2025-09-06 01:11:54.531977 | orchestrator | │ │ 2025-09-06 01:11:54.531993 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-06 01:11:54.532012 | orchestrator | │ │ cloud = │ │ 2025-09-06 01:11:54.532033 | orchestrator | │ │ definitions = { │ │ 2025-09-06 01:11:54.532044 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-06 01:11:54.532074 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-06 01:11:54.532086 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-06 01:11:54.532097 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-06 01:11:54.532108 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-06 01:11:54.532119 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-06 01:11:54.532130 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-06 01:11:54.532141 | orchestrator | │ │ │ ], │ │ 2025-09-06 01:11:54.532152 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-06 01:11:54.532169 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.559496 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-06 01:11:54.559578 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.559591 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-06 01:11:54.559602 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-06 01:11:54.559614 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-06 01:11:54.559625 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.559636 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-06 01:11:54.559647 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-06 01:11:54.559660 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.559671 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.559682 | 
orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.559692 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-06 01:11:54.559703 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.559714 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-06 01:11:54.559725 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-06 01:11:54.559735 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-06 01:11:54.559746 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.559757 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-06 01:11:54.559768 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-09-06 01:11:54.559778 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.559808 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.559820 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.559830 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-09-06 01:11:54.559841 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.559853 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-06 01:11:54.559863 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-06 01:11:54.559874 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.559885 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.559896 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-09-06 01:11:54.559907 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-09-06 01:11:54.559918 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.559929 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.559951 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.559962 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-09-06 01:11:54.559973 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.559984 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-06 01:11:54.559994 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-06 01:11:54.560005 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.560016 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.560027 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-09-06 01:11:54.560038 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-09-06 01:11:54.560049 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.560093 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.560120 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.560132 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-09-06 01:11:54.560144 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.560155 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-06 01:11:54.560167 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-06 01:11:54.560178 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.560189 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.560200 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-09-06 01:11:54.560211 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-09-06 01:11:54.560223 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.560241 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.560252 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.560263 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-09-06 01:11:54.560274 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.560285 | 
orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-06 01:11:54.560296 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-06 01:11:54.560307 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.560319 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.560330 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-09-06 01:11:54.560341 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-09-06 01:11:54.560352 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.560363 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.560374 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.560385 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-09-06 01:11:54.560396 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.560407 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-06 01:11:54.560418 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-06 01:11:54.560429 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.560444 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.560455 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-09-06 01:11:54.560466 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-09-06 01:11:54.560477 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.560488 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.560500 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.560511 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-09-06 01:11:54.560522 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-06 01:11:54.560533 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-06 01:11:54.560544 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-09-06 01:11:54.560555 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.560566 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.560577 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-09-06 01:11:54.560588 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-09-06 01:11:54.560599 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.560623 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.634434 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.634515 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-09-06 01:11:54.634553 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-06 01:11:54.634565 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-06 01:11:54.634576 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-06 01:11:54.634588 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-06 01:11:54.634598 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.634609 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-09-06 01:11:54.634620 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-09-06 01:11:54.634631 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.634642 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.634652 | orchestrator | │ │ │ │ { │ │ 2025-09-06 01:11:54.634663 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-09-06 01:11:54.634674 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-06 01:11:54.634685 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-06 01:11:54.634695 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-06 01:11:54.634706 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 
2025-09-06 01:11:54.634717 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-06 01:11:54.634728 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-09-06 01:11:54.634738 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-09-06 01:11:54.634749 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-06 01:11:54.634760 | orchestrator | │ │ │ │ }, │ │ 2025-09-06 01:11:54.634771 | orchestrator | │ │ │ │ ... +19 │ │ 2025-09-06 01:11:54.634781 | orchestrator | │ │ │ ] │ │ 2025-09-06 01:11:54.634792 | orchestrator | │ │ } │ │ 2025-09-06 01:11:54.634803 | orchestrator | │ │ limit_memory = 32 │ │ 2025-09-06 01:11:54.634813 | orchestrator | │ │ recommended = True │ │ 2025-09-06 01:11:54.634824 | orchestrator | │ │ self = │ │ 2025-09-06 01:11:54.634846 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-09-06 01:11:54.634863 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯ 2025-09-06 01:11:54.634890 | orchestrator | KeyError: 'recommended' 2025-09-06 01:11:55.446279 | orchestrator | ERROR 2025-09-06 01:11:55.446712 | orchestrator | { 2025-09-06 01:11:55.446827 | orchestrator | "delta": "0:00:08.193806", 2025-09-06 01:11:55.446978 | orchestrator | "end": "2025-09-06 01:11:54.943853", 2025-09-06 01:11:55.447040 | orchestrator | "msg": "non-zero return code", 2025-09-06 01:11:55.447097 | orchestrator | "rc": 1, 2025-09-06 01:11:55.447149 | orchestrator | "start": "2025-09-06 01:11:46.750047" 2025-09-06 01:11:55.447202 | orchestrator | } failure 2025-09-06 01:11:55.468641 | 2025-09-06 01:11:55.468781 | PLAY RECAP 2025-09-06 01:11:55.468861 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2025-09-06 01:11:55.468905 | 2025-09-06 01:11:55.698668 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-09-06 01:11:55.700956 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-06 01:11:56.459514 | 2025-09-06 01:11:56.459705 | PLAY [Post output play] 2025-09-06 01:11:56.475492 | 2025-09-06 01:11:56.475638 | LOOP [stage-output : Register sources] 2025-09-06 01:11:56.541649 | 2025-09-06 01:11:56.541933 | TASK [stage-output : Check sudo] 2025-09-06 01:11:57.360659 | orchestrator | sudo: a password is required 2025-09-06 01:11:57.580473 | orchestrator | ok: Runtime: 0:00:00.018245 2025-09-06 01:11:57.596164 | 2025-09-06 01:11:57.596327 | LOOP [stage-output : Set source and destination for files and folders] 2025-09-06 01:11:57.634866 | 2025-09-06 01:11:57.635133 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-09-06 01:11:57.722279 | orchestrator | ok 2025-09-06 01:11:57.730658 | 2025-09-06 01:11:57.730779 | LOOP [stage-output : Ensure target folders exist] 2025-09-06 01:11:58.232500 | orchestrator | ok: "docs" 2025-09-06 01:11:58.232873 | 2025-09-06 01:11:58.457533 | orchestrator | ok: "artifacts" 2025-09-06 01:11:58.702723 | orchestrator | ok: "logs" 2025-09-06 01:11:58.723896 | 2025-09-06 01:11:58.724075 | LOOP [stage-output : Copy files and folders to staging folder] 2025-09-06 01:11:58.761963 | 2025-09-06 01:11:58.762238 | TASK [stage-output : Make all log files readable] 2025-09-06 01:11:59.045450 | orchestrator | ok 2025-09-06 01:11:59.055536 | 2025-09-06 01:11:59.055739 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-09-06 01:11:59.090726 | orchestrator | skipping: Conditional result was 
False 2025-09-06 01:11:59.107629 | 2025-09-06 01:11:59.107827 | TASK [stage-output : Discover log files for compression] 2025-09-06 01:11:59.132744 | orchestrator | skipping: Conditional result was False 2025-09-06 01:11:59.146979 | 2025-09-06 01:11:59.147162 | LOOP [stage-output : Archive everything from logs] 2025-09-06 01:11:59.195325 | 2025-09-06 01:11:59.195523 | PLAY [Post cleanup play] 2025-09-06 01:11:59.204110 | 2025-09-06 01:11:59.204230 | TASK [Set cloud fact (Zuul deployment)] 2025-09-06 01:11:59.259009 | orchestrator | ok 2025-09-06 01:11:59.271163 | 2025-09-06 01:11:59.271278 | TASK [Set cloud fact (local deployment)] 2025-09-06 01:11:59.305399 | orchestrator | skipping: Conditional result was False 2025-09-06 01:11:59.319006 | 2025-09-06 01:11:59.319149 | TASK [Clean the cloud environment] 2025-09-06 01:11:59.879280 | orchestrator | 2025-09-06 01:11:59 - clean up servers 2025-09-06 01:12:00.611542 | orchestrator | 2025-09-06 01:12:00 - testbed-manager 2025-09-06 01:12:00.693745 | orchestrator | 2025-09-06 01:12:00 - testbed-node-3 2025-09-06 01:12:00.779574 | orchestrator | 2025-09-06 01:12:00 - testbed-node-4 2025-09-06 01:12:00.867487 | orchestrator | 2025-09-06 01:12:00 - testbed-node-5 2025-09-06 01:12:00.960283 | orchestrator | 2025-09-06 01:12:00 - testbed-node-2 2025-09-06 01:12:01.044259 | orchestrator | 2025-09-06 01:12:01 - testbed-node-0 2025-09-06 01:12:01.133902 | orchestrator | 2025-09-06 01:12:01 - testbed-node-1 2025-09-06 01:12:01.221719 | orchestrator | 2025-09-06 01:12:01 - clean up keypairs 2025-09-06 01:12:01.240901 | orchestrator | 2025-09-06 01:12:01 - testbed 2025-09-06 01:12:01.265592 | orchestrator | 2025-09-06 01:12:01 - wait for servers to be gone 2025-09-06 01:12:14.222001 | orchestrator | 2025-09-06 01:12:14 - clean up ports 2025-09-06 01:12:14.420206 | orchestrator | 2025-09-06 01:12:14 - 33a5d9a6-879d-45ce-8028-99b05c8c725d 2025-09-06 01:12:14.664290 | orchestrator | 2025-09-06 01:12:14 - aa4f08fa-a974-42ec-8d17-4054f8c5e250 2025-09-06 01:12:14.910138 | orchestrator | 2025-09-06 01:12:14 - ba6c6c74-0301-4ad4-a832-184e93b57e9d 2025-09-06 01:12:15.157695 | orchestrator | 2025-09-06 01:12:15 - bb94ced5-6333-4607-a1da-7015ca5a2f2e 2025-09-06 01:12:15.375720 | orchestrator | 2025-09-06 01:12:15 - d0a36037-87ee-48ea-b088-c386494650a9 2025-09-06 01:12:15.572090 | orchestrator | 2025-09-06 01:12:15 - d65be45a-616d-41a9-9283-76e03366a604 2025-09-06 01:12:15.983186 | orchestrator | 2025-09-06 01:12:15 - fbf932c3-eac6-40d6-a79d-5d3825925213 2025-09-06 01:12:16.697565 | orchestrator | 2025-09-06 01:12:16 - clean up volumes 2025-09-06 01:12:16.816187 | orchestrator | 2025-09-06 01:12:16 - testbed-volume-4-node-base 2025-09-06 01:12:16.856121 | orchestrator | 2025-09-06 01:12:16 - testbed-volume-0-node-base 2025-09-06 01:12:16.894751 | orchestrator | 2025-09-06 01:12:16 - testbed-volume-1-node-base 2025-09-06 01:12:16.937312 | orchestrator | 2025-09-06 01:12:16 - testbed-volume-5-node-base 2025-09-06 01:12:16.978914 | orchestrator | 2025-09-06 01:12:16 - testbed-volume-2-node-base 2025-09-06 01:12:17.020830 | orchestrator | 2025-09-06 01:12:17 - testbed-volume-3-node-base 2025-09-06 01:12:17.062430 | orchestrator | 2025-09-06 01:12:17 - testbed-volume-manager-base 2025-09-06 01:12:17.234391 | orchestrator | 2025-09-06 01:12:17 - testbed-volume-6-node-3 2025-09-06 01:12:17.275756 | orchestrator | 2025-09-06 01:12:17 - testbed-volume-7-node-4 2025-09-06 01:12:17.318404 | orchestrator | 2025-09-06 01:12:17 - testbed-volume-2-node-5 2025-09-06 01:12:17.357367 | 
orchestrator | 2025-09-06 01:12:17 - testbed-volume-8-node-5 2025-09-06 01:12:17.397935 | orchestrator | 2025-09-06 01:12:17 - testbed-volume-4-node-4 2025-09-06 01:12:17.435887 | orchestrator | 2025-09-06 01:12:17 - testbed-volume-1-node-4 2025-09-06 01:12:17.472265 | orchestrator | 2025-09-06 01:12:17 - testbed-volume-5-node-5 2025-09-06 01:12:17.517342 | orchestrator | 2025-09-06 01:12:17 - testbed-volume-0-node-3 2025-09-06 01:12:17.554806 | orchestrator | 2025-09-06 01:12:17 - testbed-volume-3-node-3 2025-09-06 01:12:17.593389 | orchestrator | 2025-09-06 01:12:17 - disconnect routers 2025-09-06 01:12:17.707605 | orchestrator | 2025-09-06 01:12:17 - testbed 2025-09-06 01:12:18.644382 | orchestrator | 2025-09-06 01:12:18 - clean up subnets 2025-09-06 01:12:18.695707 | orchestrator | 2025-09-06 01:12:18 - subnet-testbed-management 2025-09-06 01:12:18.871457 | orchestrator | 2025-09-06 01:12:18 - clean up networks 2025-09-06 01:12:19.071602 | orchestrator | 2025-09-06 01:12:19 - net-testbed-management 2025-09-06 01:12:19.346712 | orchestrator | 2025-09-06 01:12:19 - clean up security groups 2025-09-06 01:12:19.387789 | orchestrator | 2025-09-06 01:12:19 - testbed-node 2025-09-06 01:12:19.507074 | orchestrator | 2025-09-06 01:12:19 - testbed-management 2025-09-06 01:12:19.621354 | orchestrator | 2025-09-06 01:12:19 - clean up floating ips 2025-09-06 01:12:19.652770 | orchestrator | 2025-09-06 01:12:19 - 81.163.192.59 2025-09-06 01:12:19.990655 | orchestrator | 2025-09-06 01:12:19 - clean up routers 2025-09-06 01:12:20.093221 | orchestrator | 2025-09-06 01:12:20 - testbed 2025-09-06 01:12:21.380994 | orchestrator | ok: Runtime: 0:00:21.405807 2025-09-06 01:12:21.387041 | 2025-09-06 01:12:21.387198 | PLAY RECAP 2025-09-06 01:12:21.387305 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-09-06 01:12:21.387354 | 2025-09-06 01:12:21.523890 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-06 01:12:21.526327 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-06 01:12:22.254641 | 2025-09-06 01:12:22.254793 | PLAY [Cleanup play] 2025-09-06 01:12:22.270797 | 2025-09-06 01:12:22.270950 | TASK [Set cloud fact (Zuul deployment)] 2025-09-06 01:12:22.323551 | orchestrator | ok 2025-09-06 01:12:22.330435 | 2025-09-06 01:12:22.330592 | TASK [Set cloud fact (local deployment)] 2025-09-06 01:12:22.364386 | orchestrator | skipping: Conditional result was False 2025-09-06 01:12:22.374476 | 2025-09-06 01:12:22.374607 | TASK [Clean the cloud environment] 2025-09-06 01:12:23.493272 | orchestrator | 2025-09-06 01:12:23 - clean up servers 2025-09-06 01:12:23.963777 | orchestrator | 2025-09-06 01:12:23 - clean up keypairs 2025-09-06 01:12:23.980085 | orchestrator | 2025-09-06 01:12:23 - wait for servers to be gone 2025-09-06 01:12:24.025586 | orchestrator | 2025-09-06 01:12:24 - clean up ports 2025-09-06 01:12:24.100391 | orchestrator | 2025-09-06 01:12:24 - clean up volumes 2025-09-06 01:12:24.166460 | orchestrator | 2025-09-06 01:12:24 - disconnect routers 2025-09-06 01:12:24.200712 | orchestrator | 2025-09-06 01:12:24 - clean up subnets 2025-09-06 01:12:24.224262 | orchestrator | 2025-09-06 01:12:24 - clean up networks 2025-09-06 01:12:24.865443 | orchestrator | 2025-09-06 01:12:24 - clean up security groups 2025-09-06 01:12:24.903898 | orchestrator | 2025-09-06 01:12:24 - clean up floating ips 2025-09-06 01:12:24.927882 | orchestrator | 2025-09-06 01:12:24 - clean up routers 
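Both cleanup runs walk through the same teardown order: servers and keypairs first, then ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the routers themselves. A rough openstacksdk sketch of that ordering, filtered to testbed-prefixed resources, is given here for orientation only; the cloud name, the name prefix and the unfiltered port loop are assumptions, not the testbed's actual cleanup script.

    # Rough illustration of the teardown order logged above (not the actual
    # cleanup script; cloud name and "testbed" prefix are assumptions).
    import openstack

    conn = openstack.connect(cloud="testbed")           # assumed clouds.yaml entry
    prefix = "testbed"

    for server in conn.compute.servers():
        if server.name.startswith(prefix):
            conn.compute.delete_server(server)           # "clean up servers"
    for keypair in conn.compute.keypairs():
        if keypair.name.startswith(prefix):
            conn.compute.delete_keypair(keypair)         # "clean up keypairs"
    # ... wait for the servers to be gone before removing ports and volumes ...
    for port in conn.network.ports():
        conn.network.delete_port(port)                   # "clean up ports"
        # (the real script scopes this to the testbed project/network)
    for volume in conn.block_storage.volumes():
        if volume.name.startswith(prefix):
            conn.block_storage.delete_volume(volume)     # "clean up volumes"
    for router in conn.network.routers():
        for subnet in conn.network.subnets():
            if subnet.name.startswith(f"subnet-{prefix}"):
                # "disconnect routers" so subnets and networks can be removed
                conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
    for subnet in conn.network.subnets():
        if subnet.name.startswith(f"subnet-{prefix}"):
            conn.network.delete_subnet(subnet)           # "clean up subnets"
    for network in conn.network.networks():
        if network.name.startswith(f"net-{prefix}"):
            conn.network.delete_network(network)         # "clean up networks"
    for group in conn.network.security_groups():
        if group.name.startswith(prefix):
            conn.network.delete_security_group(group)    # "clean up security groups"
    for fip in conn.network.ips():
        conn.network.delete_ip(fip)                      # "clean up floating ips"
    for router in conn.network.routers():
        if router.name.startswith(prefix):
            conn.network.delete_router(router)           # "clean up routers"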
2025-09-06 01:12:25.410517 | orchestrator | ok: Runtime: 0:00:01.812815 2025-09-06 01:12:25.414122 | 2025-09-06 01:12:25.414301 | PLAY RECAP 2025-09-06 01:12:25.414421 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-09-06 01:12:25.414482 | 2025-09-06 01:12:25.532407 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-06 01:12:25.534806 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-06 01:12:26.377768 | 2025-09-06 01:12:26.377911 | PLAY [Base post-fetch] 2025-09-06 01:12:26.393022 | 2025-09-06 01:12:26.393145 | TASK [fetch-output : Set log path for multiple nodes] 2025-09-06 01:12:26.458636 | orchestrator | skipping: Conditional result was False 2025-09-06 01:12:26.472477 | 2025-09-06 01:12:26.472686 | TASK [fetch-output : Set log path for single node] 2025-09-06 01:12:26.537024 | orchestrator | ok 2025-09-06 01:12:26.549791 | 2025-09-06 01:12:26.550135 | LOOP [fetch-output : Ensure local output dirs] 2025-09-06 01:12:27.048174 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/71ef850d7ba44e3781b4c25afea98073/work/logs" 2025-09-06 01:12:27.328798 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/71ef850d7ba44e3781b4c25afea98073/work/artifacts" 2025-09-06 01:12:27.606160 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/71ef850d7ba44e3781b4c25afea98073/work/docs" 2025-09-06 01:12:27.630563 | 2025-09-06 01:12:27.630723 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-09-06 01:12:28.537401 | orchestrator | changed: .d..t...... ./ 2025-09-06 01:12:28.537825 | orchestrator | changed: All items complete 2025-09-06 01:12:28.537887 | 2025-09-06 01:12:29.267666 | orchestrator | changed: .d..t...... ./ 2025-09-06 01:12:29.985849 | orchestrator | changed: .d..t...... 
./ 2025-09-06 01:12:30.016123 | 2025-09-06 01:12:30.016273 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-09-06 01:12:30.053786 | orchestrator | skipping: Conditional result was False 2025-09-06 01:12:30.056407 | orchestrator | skipping: Conditional result was False 2025-09-06 01:12:30.075272 | 2025-09-06 01:12:30.075368 | PLAY RECAP 2025-09-06 01:12:30.075429 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-09-06 01:12:30.075463 | 2025-09-06 01:12:30.198637 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-06 01:12:30.201114 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-06 01:12:30.944434 | 2025-09-06 01:12:30.944638 | PLAY [Base post] 2025-09-06 01:12:30.959262 | 2025-09-06 01:12:30.959393 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-09-06 01:12:31.921512 | orchestrator | changed 2025-09-06 01:12:31.928794 | 2025-09-06 01:12:31.928897 | PLAY RECAP 2025-09-06 01:12:31.928961 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-09-06 01:12:31.929025 | 2025-09-06 01:12:32.043709 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-06 01:12:32.046130 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-09-06 01:12:32.849098 | 2025-09-06 01:12:32.849266 | PLAY [Base post-logs] 2025-09-06 01:12:32.859979 | 2025-09-06 01:12:32.860114 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-09-06 01:12:33.321804 | localhost | changed 2025-09-06 01:12:33.334921 | 2025-09-06 01:12:33.335081 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-09-06 01:12:33.370620 | localhost | ok 2025-09-06 01:12:33.373710 | 2025-09-06 01:12:33.373811 | TASK [Set zuul-log-path fact] 2025-09-06 01:12:33.399628 | localhost | ok 2025-09-06 01:12:33.411306 | 2025-09-06 01:12:33.411433 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-06 01:12:33.437736 | localhost | ok 2025-09-06 01:12:33.443786 | 2025-09-06 01:12:33.443954 | TASK [upload-logs : Create log directories] 2025-09-06 01:12:33.931263 | localhost | changed 2025-09-06 01:12:33.937039 | 2025-09-06 01:12:33.937205 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-09-06 01:12:34.429112 | localhost -> localhost | ok: Runtime: 0:00:00.007163 2025-09-06 01:12:34.438391 | 2025-09-06 01:12:34.438598 | TASK [upload-logs : Upload logs to log server] 2025-09-06 01:12:35.015512 | localhost | Output suppressed because no_log was given 2025-09-06 01:12:35.018620 | 2025-09-06 01:12:35.018775 | LOOP [upload-logs : Compress console log and json output] 2025-09-06 01:12:35.069804 | localhost | skipping: Conditional result was False 2025-09-06 01:12:35.074949 | localhost | skipping: Conditional result was False 2025-09-06 01:12:35.090484 | 2025-09-06 01:12:35.090803 | LOOP [upload-logs : Upload compressed console log and json output] 2025-09-06 01:12:35.136629 | localhost | skipping: Conditional result was False 2025-09-06 01:12:35.137215 | 2025-09-06 01:12:35.140357 | localhost | skipping: Conditional result was False 2025-09-06 01:12:35.154670 | 2025-09-06 01:12:35.155012 | LOOP [upload-logs : Upload console log and json output]
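After fetching, the post-logs play indexes the staged files and uploads them (generate-zuul-manifest, upload-logs). The snippet below only illustrates the general idea of walking a staged log directory and writing a small JSON index before upload; the output format is a simplification and not the actual Zuul manifest schema, and the directory path and file name are assumptions.

    # Simplified illustration of indexing a staged log directory before upload.
    # Not the real Zuul manifest format; path and file name are assumptions.
    import json
    import os

    log_root = "work/logs"                 # assumed staging directory
    entries = []
    for dirpath, _dirnames, filenames in os.walk(log_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            entries.append({
                "name": os.path.relpath(path, log_root),
                "size": os.path.getsize(path),
            })

    with open("manifest.json", "w") as handle:
        json.dump({"files": entries}, handle, indent=2)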